Senior Data Engineer

at Data Team

Agency job
8 - 12 yrs
₹10L - ₹20L / yr (ESOP available)
Remote only
Skills
Big Data
Data engineering
Hadoop
data engineer
Apache Hive
Apache Kafka
Senior Data Engineer (SDE)

(Hadoop, HDFS, Kafka, Spark, Hive)

Overall Experience - 8 to 12 years

Relevant experience in Big Data - 3+ years in the technologies above

Salary: up to ₹20 LPA

Job location - Chennai / Bangalore

Notice Period - Immediate joiner / 15 to 20 days maximum

The responsibilities of the Senior Data Engineer are:

- Requirements gathering and assessment

- Break down complexity and translate requirements into specification artifacts and storyboards to build towards, using a test-driven approach

- Engineer scalable data pipelines using big data technologies including but not limited to Hadoop, HDFS, Kafka, HBase, Elastic

- Implement pipelines using execution frameworks including but not limited to MapReduce, Spark and Hive, with Java/Scala/Python for application design

- Mentoring juniors in a dynamic team setting

- Manage stakeholders with proactive communication, upholding TheDataTeam's brand and values
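The pipeline responsibilities above centre on execution frameworks such as MapReduce and Spark. As an illustration only (a plain-Python sketch of the programming model, not TheDataTeam's actual codebase), the map/shuffle/reduce phases behind those frameworks look like this:

```python
# Illustrative only: the map/shuffle/reduce phases of a word count,
# sketched in plain Python (a real job would run on Hadoop or Spark).
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, as the framework's shuffle/sort step does.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"], counts["data"], counts["pipelines"])  # 2 2 2
```

The same shape scales out because the map and reduce steps are independent per key, which is what lets the real frameworks distribute them across a cluster.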

A Candidate Must Have the Following Skills:

- Strong problem-solving ability

- Excellent software design and implementation ability

- Exposure and commitment to agile methodologies

- Detail oriented with willingness to proactively own software tasks as well as management tasks, and see them to completion with minimal guidance

- Minimum 8 years of experience

- Should have experience with the full life-cycle of at least one big data application

- Strong understanding of various storage formats (ORC/Parquet/Avro)

- Should have hands-on experience in one of the Hadoop distributions (Hortonworks/Cloudera/MapR)

- Experience in at least one cloud environment (GCP/AWS/Azure)

- Should be well versed with at least one database (MySQL/Oracle/MongoDB/Postgres)

- Bachelor's in Computer Science, and preferably a Master's as well

- Should have good code review and debugging skills
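On the storage-format requirement: Parquet and ORC are column-oriented while Avro is row-oriented. A plain-Python sketch (illustrative only, no real Parquet/ORC libraries involved) of why columnar layout suits analytical scans:

```python
# Illustrative only: the row-vs-columnar layout distinction behind
# Avro (row-oriented) and Parquet/ORC (column-oriented).
rows = [
    {"user_id": 1, "country": "IN", "amount": 120.0},
    {"user_id": 2, "country": "US", "amount": 80.5},
    {"user_id": 3, "country": "IN", "amount": 42.25},
]

# Columnar: pivot the records so each field is stored contiguously.
columns = {field: [row[field] for row in rows] for field in rows[0]}

# A scan over one column touches only that column's data --
# the reason analytical engines favour Parquet/ORC for queries
# that aggregate a few fields out of many.
total = sum(columns["amount"])
print(total)  # 242.75
```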

Additional skills (Good to have):

- Experience in containerization (Docker/Heroku)

- Exposure to microservices

- Exposure to DevOps practices

- Experience in performance tuning of big data applications
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.
Subodh Popalwar's profile image

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

Similar jobs

Sigmoid
Posted by Jayakumar AS
Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹12L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more

Sigmoid works with a variety of clients, from start-ups to Fortune 500 companies. We are looking for a detail-oriented self-starter to assist our engineering and analytics teams in various roles as a Software Development Engineer.


This position will be a part of a growing team working towards building world-class, large-scale Big Data architectures. This individual should have a sound understanding of programming principles, experience in programming in Java, Python or similar languages, and can expect to spend a majority of their time coding.


Location - Bengaluru and Hyderabad


Responsibilities:

● Good development practices

○ Hands-on coder with good experience in programming languages like Java or Python.

○ Hands-on experience with the Big Data stack, e.g. PySpark, HBase, Hadoop, MapReduce and ElasticSearch.

○ Good understanding of programming principles and development practices like check-in policy, unit testing and code deployment

○ Self-starter able to grasp new concepts and technologies and translate them into large-scale engineering developments

○ Excellent experience in application development and support, integration development and data management.

● Align Sigmoid with key client initiatives

○ Interface daily with customers across leading Fortune 500 companies to understand strategic requirements


● Stay up to date on the latest technology to ensure the greatest ROI for customers and Sigmoid

○ Hands-on coder with a good understanding of enterprise-level code

○ Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems

○ Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a parallel-processing environment


● Culture

○ Must be a strategic thinker with the ability to think unconventionally / out of the box.

○ Analytical and data driven orientation.

○ Raw intellect, talent and energy are critical.


○ Entrepreneurial and agile: understands the demands of a private, high-growth company.

○ Ability to be both a leader and a hands-on "doer".


Qualifications:

- A track record of relevant work experience and a Computer Science or related technical degree is required

- Experience with functional and object-oriented programming; Java is a must.

- Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase and ElasticSearch.

- Effective communication skills (both written and verbal)

- Ability to collaborate with a diverse set of engineers, data scientists and product managers

- Comfort in a fast-paced start-up environment


Preferred Qualification:

- Technical knowledge of MapReduce, Hadoop & the GCS stack is a plus.

- Experience in agile methodology

- Experience with database modeling and development, data mining and warehousing.

- Experience in architecture and delivery of enterprise-scale applications, capable of developing frameworks, design patterns etc. Should be able to understand and tackle technical challenges, propose comprehensive solutions and guide junior staff

- Experience working with large, complex data sets from a variety of sources

Top Multinational Fintech Startup
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
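The first bullet above mentions delivering plugins for Python-based ETL pipelines. A minimal registry-style plugin sketch follows; the registry, decorator and step names are hypothetical illustrations, not this company's actual framework:

```python
# A minimal sketch of a plugin-style ETL step registry.
# PLUGINS, register, strip_nulls and to_cents are all hypothetical names.
PLUGINS = {}

def register(name):
    """Decorator that registers a transform function under a name."""
    def wrap(fn):
        PLUGINS[name] = fn
        return fn
    return wrap

@register("strip_nulls")
def strip_nulls(records):
    # Drop records whose value is missing.
    return [r for r in records if r.get("value") is not None]

@register("to_cents")
def to_cents(records):
    # Convert float amounts to integer cents.
    return [{**r, "value": int(r["value"] * 100)} for r in records]

def run_pipeline(records, steps):
    # Apply each named plugin in order.
    for step in steps:
        records = PLUGINS[step](records)
    return records

data = [{"value": 1.5}, {"value": None}, {"value": 0.25}]
out = run_pipeline(data, ["strip_nulls", "to_cents"])
print(out)  # [{'value': 150}, {'value': 25}]
```

The registry pattern is one common way to make an ETL pipeline extensible: new transforms plug in by name without touching the pipeline runner.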
What are we looking for?
  • First and foremost, you are a Python developer, experienced with the Python data stack
  • You love and care about data
  • Your code is an artistic manifest reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have taken or plan to take, personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Cognologix Technologies
Posted by Priyal Wagh
Remote, Pune
4 - 9 yrs
₹10L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more

You will work on: 

 

We help many of our clients make sense of their large investments in data – be it building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights. 

 

What you will do (Responsibilities):

 

Collaborate with product management & engineering to build highly efficient data pipelines. 

You will be responsible for:

 

  • Dealing with large customer data and building highly efficient pipelines
  • Building insights dashboards
  • Troubleshooting data loss, data inconsistency, and other data-related issues
  • Working in a product development environment, delivering stories in a scaled agile methodology

 

What you bring (Skills):

 

5+ years of experience in hands-on data engineering & large-scale distributed applications

 

  • Extensive experience in object-oriented programming languages such as Java or Scala
  • Extensive experience in RDBMS such as MySQL, Oracle, SQL Server, etc.
  • Experience in functional programming languages such as JavaScript, Scala, or Python
  • Experience in developing and deploying applications in Linux OS
  • Experience in big data processing technologies such as Hadoop, Spark, Kafka, Databricks, etc.
  • Experience in Cloud-based services such as Amazon AWS, Microsoft Azure, or Google Cloud Platform
  • Experience with Scrum and/or other Agile development processes
  • Strong analytical and problem-solving skills

 

Great if you know (Skills):

 

  • Some exposure to containerization technologies such as Docker, Kubernetes, or Amazon ECS/EKS
  • Some exposure to microservices frameworks such as Spring Boot, Eclipse Vert.x, etc.
  • Some exposure to NoSQL data stores such as Couchbase, Solr, etc.
  • Some exposure to Perl or shell scripting
  • Ability to lead R&D and POC efforts
  • Ability to learn new technologies on their own
  • Team player with the self-drive to work independently
  • Strong communication and interpersonal skills

Advantage Cognologix:

  •  A higher degree of autonomy, startup culture & small teams
  •  Opportunities to become an expert in emerging technologies
  •  Remote working options for the right maturity level
  •  Competitive salary & family benefits
  •  Performance-based career advancement


About Cognologix: 

 

Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.

We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern and cloud-native way.

Benefits Working With Us:

  • Health & Wellbeing
  • Learn & Grow
  • Evangelize
  • Celebrate Achievements
  • Financial Wellbeing
  • Medical and Accidental cover
  • Flexible Working Hours
  • Sports Club & much more
Persistent Systems
Agency job via Milestone HR Consultancy by Haina Khan
Bengaluru (Bangalore), Hyderabad, Pune
9 - 16 yrs
₹7L - ₹32L / yr
Big Data
skill iconScala
Spark
Hadoop
skill iconPython
+1 more
Greetings,

We have an urgent requirement for the post of Big Data Architect at a reputed MNC.

Location: Pune / Nagpur / Goa / Hyderabad / Bangalore

Job Requirements:

  • 9+ years of total experience, preferably in the Big Data space.
  • Creating Spark applications using Scala to process data.
  • Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
  • Experience in Spark job performance tuning and optimization.
  • Should have experience in processing data using Kafka/Python.
  • Should have experience and understanding in configuring Kafka topics to optimize performance.
  • Should be proficient in writing SQL queries to process data in a data warehouse.
  • Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
  • Experience with AWS services like EMR.
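For the SQL-proficiency requirement above, a warehouse-style GROUP BY aggregation can be illustrated with Python's built-in sqlite3 module standing in for a real warehouse engine (the `sales` table is a made-up example):

```python
# Illustrative only: a warehouse-style aggregation expressed in SQL,
# run here against an in-memory SQLite table as a stand-in for
# Hive or a data warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 100.0), ("south", 50.0), ("north", 75.0)],
)

# GROUP BY aggregation of the kind a warehouse query would use.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 75.0), ('south', 150.0)]
```

The same SELECT/GROUP BY shape carries over to Hive or any warehouse dialect; only the connection and scale differ.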
QUT
Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
3 - 7 yrs
₹1L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
What You'll Bring

  • 3+ years of experience in big data & data warehousing technologies
  • Experience in processing and organizing large data sets
  • Experience with big data toolsets such as Airflow and Oozie
  • Experience working with BigQuery, Snowflake or MPP systems, Kafka, Azure, GCP and AWS
  • Experience developing in programming languages such as SQL, Python, Java or Scala
  • Experience pulling data from a variety of database systems like SQL Server and MariaDB, and from NoSQL databases such as Cassandra
  • Experience working with retail, advertising or media data at large scale
  • Experience working with data science engineering and advanced data insights development
  • A strong proponent of quality who strives to impress with their work
  • Strong problem-solving skills and the ability to navigate complicated database relationships
  • Good written and verbal communication skills; demonstrated ability to work with product management and/or business users to understand their needs
NoBroker
Posted by noor aqsa
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
skill iconJava
Spark
PySpark
Data engineering
Big Data
+2 more
You will build, set up and maintain some of the best data pipelines and MPP frameworks for our datasets. Translate complex business requirements into scalable technical solutions meeting data design standards, with a strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency. Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals. Collaborate with multiple cross-functional teams.
European Bank headquartered at Copenhagen, Denmark.
Agency job
via Apical Mind by Rajeev T
NCR (Delhi | Gurgaon | Noida)
2 - 12 yrs
₹25L - ₹40L / yr
Data governance
DevOps
Data integration
Data engineering
skill iconPython
+14 more
Data Platforms (Data Integration) is responsible for envisioning, building and operating the Bank's data integration platforms. The successful candidate will work out of Gurgaon as part of a high-performing team distributed across our two development centers, Copenhagen and Gurugram. The individual must be driven, passionate about technology and display a level of customer service that is second to none.

Roles & Responsibilities

  • Designing and delivering a best-in-class, highly scalable data governance platform
  • Improving processes and applying best practices
  • Contributing to all scrum ceremonies, assuming the role of Scrum Master on a rotational basis
  • Development, management and operation of our infrastructure to ensure it is easy to deploy, scalable, secure and fault-tolerant
  • Flexibility on working hours as per business needs
Simplifai Cognitive Solutions Pvt Ltd
Vipul Tiwari
Posted by Vipul Tiwari
Pune
3 - 8 yrs
₹5L - ₹30L / yr
skill iconData Science
skill iconMachine Learning (ML)
skill iconPython
Big Data
SQL
+3 more
Job Description for Data Scientist/ NLP Engineer

Responsibilities for Data Scientist/ NLP Engineer

• Work with customers to identify opportunities for leveraging their data to drive business solutions.
• Develop custom data models and algorithms to apply to data sets.
• Basic data cleaning and annotation for any incoming raw data.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
• Develop the company's A/B testing framework and test model quality.
• Deployment of ML models in production.
Qualifications for Data Scientist/ NLP Engineer

• BS or MS in Computer Science, Engineering, or a related discipline.
• 3+ years of experience in Data Science/Machine Learning.
• Experience with the Python programming language.
• Familiarity with at least one database query language, such as SQL.
• Knowledge of text classification & clustering, question answering & query understanding, and search indexing & fuzzy matching.
• Excellent written and verbal communication skills for coordinating across teams.
• Willingness to learn and master new technologies and techniques.
• Knowledge of and experience with statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, NLP, etc.
• Experience with chatbots would be a bonus but is not required.
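On the fuzzy-matching point above, Python's standard-library difflib gives a lightweight illustration; this is a sketch only, and real search systems would typically use dedicated tooling (the candidate strings here are made up):

```python
# Illustrative only: fuzzy string matching with the standard library's
# difflib, a lightweight stand-in for dedicated fuzzy-match tooling.
from difflib import SequenceMatcher, get_close_matches

def similarity(a, b):
    # Similarity ratio in [0, 1]; 1.0 means identical strings.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

candidates = ["data scientist", "data engineer", "ml engineer"]

# A misspelled query still finds its closest candidate above the cutoff.
best = get_close_matches("data sceintist", candidates, n=1, cutoff=0.6)
print(best)  # ['data scientist']
```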
A Product development Organisation
Agency job
via Millions Advisory by Vasuki N
Pune
5 - 8 yrs
₹10L - ₹17L / yr
skill iconPython
Big Data
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more
  • Must have 5-8 years of experience in handling data
  • Must have the ability to interpret large amounts of data and to multi-task
  • Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting and databases (SQL, etc.)
  • Must have strong analytical and critical thinking to resolve business problems using data and tech
  • Must have familiarity with and interest in cloud technologies (Google GCP, Microsoft Azure, Amazon AWS), open-source technologies and enterprise technologies
  • Must have the ability to collect, organize, analyze and disseminate significant amounts of information with attention to detail and accuracy
  • Must have good communication skills
  • Working knowledge of / exposure to ElasticSearch, PostgreSQL, Athena, PrestoDB and Jupyter Notebook
BDI Plus Lab
Posted by Silita S
Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹12L / yr
Big Data
Hadoop
skill iconJava
skill iconPython
PySpark
+1 more

Roles and responsibilities:

  1. Responsible for the development and maintenance of applications with technologies involving Enterprise Java and distributed technologies.
  2. Experience in Hadoop, Kafka, Spark, Elastic Search, SQL, Kibana and Python; experience with machine learning and analytics etc.
  3. Collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
  4. Collaborate with the QA team to define test cases and metrics, and resolve questions about test results.
  5. Assist in the design and implementation process for new products; research and create POCs for possible solutions.
  6. Develop components based on business and/or application requirements.
  7. Create unit tests in accordance with team policies & procedures.
  8. Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process.
  9. Work with cross-functional teams during crises to address and resolve complex incidents and problems, in addition to assessment, analysis and resolution of cross-functional issues.