The job you are looking for has expired or has been deleted. Check out similar jobs below.

Similar jobs

Data Engineer

Founded 2010
Location: Pune
Experience: 1 - 5 years
Salary: Best in industry, 5 - 15 lacs/annum

Job Description
In this role you will help us build, improve, and maintain our large data infrastructure, where we collect terabytes of logs daily. Data-driven decision-making is crucial to the success of our customers, and this role is central to ensuring we have a cutting-edge data infrastructure that lets us do things faster, better, and cheaper.
Experience: 1 - 3 years
Required Skills
- Must be a polyglot with a good command of Java, Scala, and a scripting language
- Non-trivial project experience with distributed computing frameworks such as Apache Spark, Hadoop, Pig, Kafka, or Storm, with sound knowledge of their internals
- Expert knowledge of relational databases such as MySQL, and of in-memory data stores such as Redis
- Regular participation in coding/hacking contests such as TopCoder, Code Jam, and Hacker Cup is a huge plus
Prerequisites
- Strong analytical skills and a solid foundation in Computer Science fundamentals, especially data structures/algorithms, object-oriented principles, operating systems, and computer networks
- Ability and willingness to take ownership and work with minimal supervision, independently or as part of a team
- Passion for innovation and a "never say die" attitude
- Strong verbal and written communication skills
Education: B.Tech/M.Tech/MS/dual degree in Computer Science with above-average academic credentials
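For illustration only, here is a minimal PySpark sketch of the kind of daily log-aggregation batch job this posting describes; the input path, log schema, and output location are hypothetical placeholders, not details from the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-log-aggregation").getOrCreate()

# Hypothetical input: JSON log lines with 'event', 'user_id', and 'ts' fields.
logs = spark.read.json("s3://example-bucket/logs/2024-01-01/")

# Count events and unique users per event type per day.
daily_counts = (
    logs.withColumn("date", F.to_date("ts"))
        .groupBy("date", "event")
        .agg(F.countDistinct("user_id").alias("unique_users"),
             F.count("*").alias("events"))
)

# Write the aggregates as Parquet for downstream reporting (hypothetical path).
daily_counts.write.mode("overwrite").parquet("s3://example-bucket/aggregates/daily/")
spark.stop()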

Job posted by Sachin Bhatevara
Apply for job

Data Engineer

Founded 2015
Location: Mumbai
Experience: 1 - 5 years
Salary: Best in industry, 7 - 12 lacs/annum

JOB DESCRIPTION
We are looking for a Data Engineer with a solid background in scalable systems to work with our engineering team to improve and optimize our platform. You will have significant input into the team’s architectural approach and execution. We are looking for a hands-on programmer who enjoys designing and optimizing data pipelines for large-scale data. This is NOT a "data scientist" role, so please don't apply if you're looking for that.
RESPONSIBILITIES
1. Build, maintain, and test performant, scalable data pipelines
2. Work with data scientists and application developers to implement scalable pipelines for data ingest, processing, machine learning, and visualization
3. Build interfaces for ingest across various data stores
MUST-HAVE
1. A track record of building and deploying data pipelines as part of work or side projects
2. Ability to work with an RDBMS such as MySQL or Postgres
3. Ability to deploy over cloud infrastructure, at least AWS
4. Demonstrated ability and hunger to learn
GOOD-TO-HAVE
1. Computer Science degree
2. Expertise in at least one of: Python, Java, Scala
3. Expertise and experience in deploying solutions based on Spark and Kafka
4. Knowledge of container systems like Docker or Kubernetes
5. Experience with NoSQL / graph databases
6. Knowledge of Machine Learning
Kindly apply only if you are skilled in building data pipelines.
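As a rough sketch of the kind of ingest step this posting describes, here is a minimal Python example that loads rows into Postgres with psycopg2; the table, columns, CSV layout, and connection string are hypothetical, and a real pipeline would add validation and incremental loading.

import csv
import psycopg2

def load_events(csv_path: str, dsn: str) -> None:
    # Read the raw file into memory (fine for a small batch; stream for large ones).
    with open(csv_path, newline="") as f:
        rows = [(r["user_id"], r["event"], r["ts"]) for r in csv.DictReader(f)]

    conn = psycopg2.connect(dsn)
    try:
        with conn, conn.cursor() as cur:   # commits on success, rolls back on error
            cur.execute("""
                CREATE TABLE IF NOT EXISTS events (
                    user_id TEXT, event TEXT, ts TIMESTAMP
                )
            """)
            cur.executemany(
                "INSERT INTO events (user_id, event, ts) VALUES (%s, %s, %s)",
                rows,
            )
    finally:
        conn.close()

if __name__ == "__main__":
    load_events("events.csv", "dbname=analytics user=etl host=localhost")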

Job posted by Zeimona Dsouza
Apply for job

Senior Software Engineer

Founded 2013
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 4 - 6 years
Salary: Best in industry, 15 - 18 lacs/annum

Requirements:
- Minimum 4 years of work experience in building, managing, and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience in building data pipelines and real-time analytics systems
- Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of service-oriented architecture
- A record of delivering high-quality work with significant contributions
- Expertise in git, unit tests, technical documentation, and other development best practices
- Experience in handling small teams
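For context, a minimal sketch of a real-time consumer built on one of the queueing systems listed above, using the kafka-python client; the topic name, broker address, and message format are assumptions made for illustration.

import json
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "page-views",                               # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

counts = Counter()
for message in consumer:
    event = message.value                       # e.g. {"page": "/home", ...}
    counts[event.get("page", "unknown")] += 1
    if sum(counts.values()) % 1000 == 0:
        print(dict(counts.most_common(5)))      # rough rolling top pages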

Job posted by tanika monga
Apply for job

Big Data Administrator

Founded 1989
Location: Bengaluru (Bangalore)
Experience: 3 - 8 years
Salary: Best in industry, 14 - 22 lacs/annum

As a Big Data Administrator, you’ll be responsible for the administration and governance of a complex analytics platform that is already changing the way large industrial companies manage their assets. A Big Data Administrator understands cutting-edge tools and frameworks and is able to determine the best tools for any given task. You will enable and work with our other developers to use cutting-edge technologies in fields such as distributed systems, data ingestion and mapping, and machine learning. We also strongly encourage everyone to tinker with existing tools and to stay up to date and test new technologies, all with the aim of ensuring that our existing systems don’t stagnate or deteriorate.
Responsibilities: Your responsibilities may include, but are not limited to, the following:
● Build a scalable Big Data platform designed to serve many different use cases and requirements
● Build a highly scalable framework for ingesting, transforming, and enhancing data at web scale
● Develop data structures and processes using components of the Hadoop ecosystem such as Avro, Hive, Parquet, Impala, HBase, Kudu, and Tez
● Establish automated build and deployment pipelines
● Implement machine learning models that enable customers to glean hidden insights about their data
● Implement security and integrate with components such as LDAP, AD, Sentry, and Kerberos
● Apply a strong understanding of row-level and role-based security concepts, such as inheritance
● Establish scalability benchmarks for predictable scalability thresholds
Qualifications:
● Bachelor's degree in Computer Science or a related field
● 6+ years of system-building experience
● 4+ years of programming experience using JVM-based languages
● A passion for DevOps and an appreciation for continuous integration/deployment
● A passion for QA and an understanding that testing is not someone else’s responsibility
● Experience automating infrastructure and build processes
● Outstanding programming and problem-solving skills
● Strong passion for technology and building great systems
● Excellent communication skills and ability to work using Agile methodologies
● Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
● Experience with service-oriented (SOA) and event-driven (EDA) architectures
● Experience using big data solutions in an AWS environment
● Experience with NoSQL data stores: Cassandra, HDFS, and/or Elasticsearch
● Experience with JavaScript or associated frameworks
Preferred skills: We value these qualities, but they’re not required for this role:
● Master's or Ph.D. in a related field
● Experience as an open source contributor
● Experience with Akka, stream processing technologies, and concurrency frameworks
● Experience with data modeling
● Experience with Chef, Puppet, Ansible, Salt, or equivalent
● Experience with Docker, Mesos, and Marathon
● Experience with distributed messaging services, preferably Kafka
● Experience with distributed data processors, preferably Spark
● Experience with Angular, React, Redux, Immutable.js, Rx.js, Node.js, or equivalent
● Experience with Reactive and/or Functional programming
● Understanding of Thrift, Avro, or Protocol Buffers
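To ground the Hadoop-ecosystem items above, here is a minimal PySpark sketch (Python chosen for brevity, though the role emphasizes JVM languages) that lands an ingested dataset as partitioned Parquet and registers it as a table; the paths, database and table names, and partition column are hypothetical, and a configured Hive metastore is assumed.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("asset-events-ingest")
    .enableHiveSupport()          # assumes a Hive metastore is configured
    .getOrCreate()
)

events = spark.read.json("hdfs:///raw/asset_events/")   # hypothetical input path

(
    events
    .write
    .mode("append")
    .partitionBy("event_date")    # assumes an 'event_date' column exists
    .format("parquet")
    .saveAsTable("analytics.asset_events")   # assumes the 'analytics' database exists
)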

Job posted by Arjun Ravindran
Apply for job

Bigdata Lead

Founded 1997
Location: Pune
Experience: 2 - 5 years
Salary: Best in industry, 1 - 18 lacs/annum

Description
- Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce.
- Strong understanding of development languages including Java, Python, Scala, and shell scripting.
- Expertise in Apache Spark 2.x framework principles and usage.
- Proficient in developing Spark batch and streaming jobs in Python, Scala, or Java.
- Proven experience in performance tuning of Spark applications, from both the application-code and configuration perspectives.
- Proficient in Kafka and its integration with Spark.
- Proficient in Spark SQL and data warehousing techniques using Hive.
- Very proficient in Unix shell scripting and in operating on Linux.
- Knowledge of cloud-based infrastructure.
- Good experience in tuning Spark applications and improving their performance.
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, the phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation).
- Capable of working within a team or as an individual.
- Experience creating technical documentation.
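As an illustration of the Spark-plus-Kafka streaming work this posting calls for, here is a minimal PySpark Structured Streaming sketch; the topic, broker address, and checkpoint path are hypothetical, and the spark-sql-kafka connector package is assumed to be available on the cluster.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Subscribe to a hypothetical 'clickstream' topic.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka values arrive as bytes; cast to string and count events per 1-minute window.
counts = (
    raw.selectExpr("CAST(value AS STRING) AS value", "timestamp")
       .groupBy(F.window("timestamp", "1 minute"))
       .count()
)

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/clickstream")
    .start()
)
query.awaitTermination()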

Job posted by Sandeep Chaudhary
Apply for job

Technology Lead-JAVA/AWS

Founded 1997
Location: Pune
Experience: 7 - 14 years
Salary: Best in industry, 10 - 20 lacs/annum

Looking for an experienced Java Tech Lead with AWS/Hadoop expertise. Candidates from product-based firms are preferred. Must have handled a team of 10+ people.

Job posted by Vartica Lal
Apply for job

Data Architect

Founded 2011
Location: Hyderabad
Experience: 9 - 13 years
Salary: Best in industry, 10 - 23 lacs/annum

Data Architect to lead a team of 5 members. Required skills: Spark, Scala, Hadoop.

Job posted by Sravanthi Alamuri
Apply for job

Java Developer

Founded 2015
Location: Bengaluru (Bangalore), Coimbatore
Experience: 2 - 3 years
Salary: Best in industry, 2 - 6 lacs/annum

We are looking for good Java Developers for our IoT platform. If you are looking for a change, please drop in your profile at www.fernlink.com.

Job posted by Jayaraj Esvar
Apply for job

SDE III - Data

Founded 2012
Location: Mumbai, Navi Mumbai
Experience: 3 - 9 years
Salary: Best in industry, 12 - 30 lacs/annum

Your Role:
· As an integral part of the Data Engineering team, be involved in the entire development lifecycle, from conceptualization to architecture to coding to unit testing
· Build a real-time and batch analytics platform for analytics and machine learning
· Design, propose, and develop solutions keeping the growing scale and business requirements in mind
· Help us design the data model for our data warehouse and other data engineering solutions
Must Have:
· Understands data very well and has extensive data modelling experience
· Deep understanding of real-time as well as batch-processing big data technologies (Spark, Storm, Kafka, Flink, MapReduce, Yarn, Pig, Hive, HDFS, Oozie, etc.)
· Experience developing applications that work with NoSQL stores (e.g., Elasticsearch, HBase, Cassandra, MongoDB, CouchDB)
· Proven programming experience in Java or Scala
· Experience in gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, and writing SQL queries
· Experience in cloud-based data stores like Redshift and BigQuery is an advantage
Bonus:
· Love sports, especially cricket and football
· Have worked previously in a high-growth tech startup
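As a small illustration of the raw-data gathering mentioned above (writing scripts and calling APIs), here is a minimal Python sketch, in Python rather than the Java/Scala the posting lists, that pages through a REST endpoint and lands the results as newline-delimited JSON; the endpoint, pagination scheme, and output path are hypothetical.

import json
import requests

def fetch_pages(base_url: str, out_path: str, max_pages: int = 10) -> None:
    with open(out_path, "w") as out:
        for page in range(1, max_pages + 1):
            resp = requests.get(base_url, params={"page": page}, timeout=30)
            resp.raise_for_status()
            records = resp.json().get("results", [])
            if not records:
                break                      # no more data to page through
            for rec in records:
                out.write(json.dumps(rec) + "\n")

if __name__ == "__main__":
    fetch_pages("https://api.example.com/v1/events", "events.jsonl")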

Job posted by Vivek Pandey
Apply for job

Enthusiastic Cloud-ML Engineers with a keen sense of curiosity

Founded 2012
Location: Bengaluru (Bangalore)
Experience: 3 - 12 years
Salary: Best in industry, 3 - 25 lacs/annum

We are a start-up in India seeking excellence in everything we do, with unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and solve complex technical customer challenges.
Expectations and Tasks
- Total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, plus experience with data analytics and data visualization techniques and software
- Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
- Experience in NLP, R, and Python
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment
Technical Skills: Core Java, Multithreading, Collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala
We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at Jia@TalentSculpt.com; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
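For illustration of the core data-science techniques listed above (regression, classification, clustering), here is a minimal scikit-learn classification sketch; the dataset and model choice are purely illustrative and not part of the posting.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load a small built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a simple classifier and report held-out accuracy.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))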

Job posted by Blitzkrieg HR Consulting
Apply for job
Why apply via CutShort?
Connect with actual hiring teams and get a fast response from them. No third-party recruiters. No spam.