
50+ Data engineering Jobs in India

Apply to 50+ Data engineering Jobs on CutShort.io. Find your next job, effortlessly. Browse Data engineering Jobs and apply today!

Client based at Delhi/ NCR and Pune location.

Agency job
Hyderabad
8 - 12 yrs
₹20L - ₹30L / yr
Data engineering
Client Management
client engagement
Project Management
Project delivery
+8 more

Mandatory Skills: Data Engineering, Client Engagement, Project Management, Project Delivery, Team Leadership, Data Governance, Quality Assurance, Business Development, Data Architecture

Additional Skills: Communication Skills, Problem-Solving Skills

Job Description

This position requires someone with strong problem-solving skills, business understanding, and client presence. The candidate should have more than 8 years of overall professional experience, including a minimum of 5 years leading and managing a client portfolio in the data engineering space, and a good understanding of business operations, the challenges faced, and the business technology used across business functions.

The candidate must understand how traditional and modern data engineering technologies/tools are used to solve business problems and help clients in their data journey. The candidate must have knowledge of emerging technologies for data management, including data governance, data quality, security, data integration, processing, and provisioning. The candidate must also possess the soft skills required to work with teams and lead medium to large teams.

The candidate should be comfortable taking leadership roles in client projects, pre-sales/consulting, solutioning, business development conversations, and execution of data engineering projects.

Key Responsibilities:

Client Engagement & Relationship Management:

  • Serve as the primary point of contact for clients on data engineering projects, understanding their needs, challenges, and goals.
  • Develop and maintain strong client relationships, ensuring high levels of client satisfaction and repeat business.
  • Translate client requirements into actionable technical solutions and project plans.

 Project Management & Delivery:

  • Oversee the delivery of data engineering projects from inception to completion, ensuring projects are delivered on time, within scope, and within budget.
  • Manage project resources, timelines, and risks, ensuring smooth project execution and delivery.
  • Collaborate with cross-functional teams including data scientists, business analysts, and IT professionals to deliver comprehensive data solutions.

 Technical Leadership & Innovation:

  • Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs.
  • Stay abreast of industry trends, technologies, and best practices, and implement them in client projects to drive innovation and competitive advantage.
  • Provide technical oversight and guidance to the data engineering team, ensuring the adoption of best practices and high-quality output.

 Team Leadership & Development:

  • Lead, mentor, and manage a team of data engineers, fostering a collaborative and high-performance culture.
  • Provide professional development opportunities, coaching, and career growth support to team members.
  • Ensure the team is equipped with the necessary skills and tools to deliver high-quality consulting services.

 Data Governance & Quality Assurance:

  • Implement and oversee data governance frameworks, ensuring data integrity, security, and compliance across all client projects.
  • Establish and enforce data quality standards, ensuring the reliability and accuracy of data used in client solutions.

 Business Development & Consulting:

  • Support business development efforts by contributing to proposals, presenting solutions to prospective clients, and identifying opportunities for expanding client engagements.
  • Provide thought leadership in data engineering, contributing to white papers, webinars, and conferences to enhance the company’s reputation in the industry.

 Experience candidates should bring

  • 8 to 12 years of data engineering experience with at least 3 years in a managerial role within a consulting or professional services environment.
  • Proven experience in managing multiple, complex data engineering projects simultaneously.
  • Experience in leading a team of 8 to 12 professionals.
  • Strong problem-solving skills and the ability to handle complex, ambiguous situations.
  • Exceptional project management skills, with experience in Agile methodologies.
  • A client-service mindset and a desire to take on tough and challenging projects.
  • Effective communication skills, both written and verbal.
  • Ability to work effectively across functions and levels; comfort collaborating with teammates in a virtual environment.

Required Qualification

Bachelor of Engineering - Bachelor of Technology (B.E./B.Tech.) 

Client based at Bangalore location.

Agency job
Remote only
5 - 10 yrs
₹15L - ₹22L / yr
Data Warehouse (DWH)
Informatica
ETL
data lake
Data engineering
+12 more

Mandatory Skills: Data Engineering, Data Lake, Oracle BICC/BIP, Data Extraction, ETL Tools, Oracle Fusion, Big Data, Databricks, Data Ingestion, Medallion Architecture

Additional Skills: Fivetran, Sksplashview, Git, CI/CD concepts

Job Description

Seeking an experienced Data Engineer who can play a crucial role in the company's fintech data lake project.

Technical/Functional Skills:

Must have

  • 5+ years of experience working in data warehousing systems
  • Strong experience in the Oracle Fusion ecosystem, including hands-on data extraction using Oracle BICC/BIP
  • Good functional understanding of Fusion data structures
  • Strong, proven data engineering experience in big data / Databricks environments
  • Hands-on experience building data ingestion pipelines from Oracle Fusion Cloud to a Databricks environment
  • Strong data transformation/ETL skills using Spark SQL, PySpark, and Unity Catalog within the Databricks Medallion architecture (see the sketch after this list)
  • Capable of independently delivering work items and leading data discussions with Tech Leads & Business Analysts
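As a rough, hedged illustration of the bronze-to-silver step in a Medallion layout on Databricks (not this client's actual pipeline), here is a minimal PySpark sketch; the paths, table, and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested records, loaded as-is (hypothetical Delta path).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/fusion_invoices")

# Silver: cleaned and conformed - deduplicate, cast types, drop bad rows.
silver = (
    bronze.dropDuplicates(["invoice_id"])
    .withColumn("invoice_date", F.to_date("invoice_date"))
    .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.fusion_invoices")
```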

Nice to have:

  • Experience with Fivetran or any equivalent data extraction tool
  • Experience supporting Splash report development activities is a plus
  • Experience with Git, CI/CD tools, and code management processes is preferred


The candidate is expected to:

  • Follow engineering best practices, coding standards, and deployment processes.
  • Troubleshoot any performance, system or data related issues, and work to ensure data consistency and integrity.
  • Effectively communicate with users at all levels of the organization, both in written and verbal presentations.
  • Effectively communicate with other data engineers, help other team members with design and implementation activities.


B2B startup web platform

Agency job
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
8 - 15 yrs
₹60L - ₹80L / yr
Artificial Intelligence (AI)
Backend testing
User Interface (UI) Design
Engineering Management
Data engineering

For a startup run by a serial founder:


Job Description :


- Architect and develop our B2B platform, focusing on delivering an exceptional user experience.


- Recruit and mentor an engineering team to build scalable and reliable technology solutions.


- Work in tandem with co-founders to ensure technology strategies are well-aligned with business objectives.


- Manage technical architecture to guarantee performance, scalability, and adaptability for future needs.


- Drive innovation by adopting advanced technologies into our product development cycle.


- Promote a culture of technical excellence and collaborative spirit within the engineering team.


Qualifications :


- Over 8 years of experience in technology team management roles, with a proven track record in software development, system architecture, and DevOps.


- Entrepreneurial mindset with a strong interest in building and scaling innovative products.


- Exceptional leadership abilities with experience in team building and mentorship.


- Strategic thinker with a history of delivering effective technology solutions.


Skills :


- Expertise in modern programming languages and application frameworks.


- Deep knowledge of cloud platforms, databases, and system design principles.


- Excellent analytical, problem-solving, and decision-making skills.


- Strong communication skills with the ability to lead cross-functional teams effectively.

The Modern Data Company
Remote only
5 - 11 yrs
₹35L - ₹55L / yr
GenAI
Artificial Intelligence (AI)
Machine Learning (ML)
Data Analytics
Data engineering

Job Description: Product Manager for GenAI Applications on Data Products

About the Company: We are a forward-thinking technology company specializing in creating innovative data products and AI applications. Our mission is to harness the power of data and AI to drive business growth and efficiency. We are seeking a dynamic and experienced Product Manager to join our team and lead the development of cutting-edge GenAI applications.

Role Overview: As a Product Manager for GenAI Applications, you will be responsible for conceptualizing, developing, and managing AI-driven products that leverage our data platforms. You will work closely with cross-functional teams, including engineering, data science, marketing, and sales, to ensure the successful delivery of high-impact AI solutions. Your understanding of business user needs and your ability to translate them into effective AI applications will be crucial.

Key Responsibilities:

- Lead the end-to-end product lifecycle, from ideation to launch, for GenAI applications.
- Collaborate with engineering and data science teams to design, develop, and deploy AI solutions.
- Conduct market research and gather user feedback to identify opportunities for new product features and improvements.
- Develop detailed product requirements, roadmaps, and user stories to guide development efforts.
- Work with business stakeholders to understand their needs and ensure the AI applications meet their requirements.
- Drive the product vision and strategy, aligning it with company goals and market demands.
- Monitor and analyze product performance, leveraging data to make informed decisions and optimizations.
- Coordinate with marketing and sales teams to create go-to-market strategies and support product launches.
- Foster a culture of innovation and continuous improvement within the product development team.

Qualifications:

- Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field.
- 3-5 years of experience in product management, specifically in building AI applications.
- Proven track record of developing and launching AI-driven products from scratch.
- Experience working with data application layers and understanding data architecture.
- Strong understanding of the psyche of business users and the ability to translate their needs into technical solutions.
- Excellent project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
- Strong analytical and problem-solving skills, with a data-driven approach to decision making.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Passion for AI and a deep understanding of the latest trends and technologies in the field.

Benefits:

- Competitive salary and benefits package.
- Opportunity to work on cutting-edge AI technologies and products.
- Collaborative and innovative work environment.
- Professional development opportunities and career growth.

If you are a passionate Product Manager with a strong background in AI and data products, and you are excited about building transformative AI applications, we would love to hear from you. Apply now to join our dynamic team and make an impact in the world of AI and data.

Incubyte

Posted by Tejas Thakker
Remote only
1 - 3 yrs
₹8L - ₹18L / yr
DBT (Data Build Tool)
SQL
Windows Azure
MySQL
ETL
+6 more

Who are we?

 

We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their idea efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long term commitments with an aim of bringing a product mindset into services.

 

What we are looking for

 

We’re looking to hire software craftspeople. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only on programming languages but also on infrastructure technologies in the cloud.

 

What you’ll be doing

 

First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.

 

You will work in a product team. Building products and rapidly rolling out new features and fixes.

 

You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!

 

Skills you need in order to succeed in this role


Most Important: Integrity of character, diligence and the commitment to do your best

Must Have: SQL, Databricks (Scala/PySpark), Azure Data Factory, Test-Driven Development

Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing

 

Self-Learner: You must be extremely hands-on and obsessive about delivering clean code

 

  • Sense of Ownership: Do whatever it takes to meet development timelines
  • Experience in creating end-to-end data pipelines
  • Experience in Azure Data Factory (ADF): creating multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure SQL DW (see the sketch after this list)
  • Working experience in Databricks
  • Strong in BI/DW/Data Lake architecture, design, and ETL
  • Strong in requirement analysis, data analysis, and data modeling
  • Experience in object-oriented programming, data structures, algorithms, and software engineering
  • Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment
  • Interest in mastering technologies like relational DBMSs, TDD, CI tools like Azure DevOps, complexity analysis, and performance
  • Working knowledge of server configuration / deployment
  • Experience using source control and bug tracking systems, and writing user stories and technical documentation
  • Expertise in creating tables, procedures, functions, triggers, indexes, views, joins, and optimization of complex queries
  • Experience with database versioning, backups, and restores
  • Expertise in data security
  • Ability to perform database and query performance tuning
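As a hedged sketch of the full/incremental load pattern referenced above (illustrative only, not a prescribed implementation), the PySpark snippet below filters source rows by a watermark column before appending them to the lake; every name in it (host, table, column, path) is hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load-sketch").getOrCreate()

# Watermark from the last successful run; a real pipeline would read this
# from a control table rather than hardcoding it.
last_watermark = "2024-01-01 00:00:00"

# Hypothetical JDBC source; the proper driver jar must be on the cluster.
source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;database=sales")
    .option("dbtable", "dbo.orders")
    .load()
)

# Incremental slice: only rows modified since the last run, appended to the lake.
delta_rows = source.filter(F.col("modified_at") > F.lit(last_watermark))
delta_rows.write.format("delta").mode("append").save("/mnt/lake/raw/orders")
```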
Product-based company in Bangalore

Agency job
Remote only
4 - 9 yrs
₹20L - ₹30L / yr
Data Structures
Large Language Models (LLM) tuning
GPT
Llama2
Mistral
+9 more

We are seeking an experienced Data Scientist with a proven track record in Machine Learning and Deep Learning, and a demonstrated focus on Large Language Models (LLMs), to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact for patients and healthcare providers.

Responsibilities

• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering (see the sketch after this list). Experience with other transformer-based NLP models, such as BERT, will be an added advantage.

• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning

• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.

• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.

• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)

• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions

• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
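As a hedged, minimal sketch of the fine-tuning workflow described above (not this team's actual setup), the snippet below fine-tunes a small transformer for text classification with Hugging Face Transformers; the checkpoint and dataset are stand-ins, and an LLM such as Llama2 would follow the same pattern, typically with parameter-efficient methods like LoRA on top:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Stand-in checkpoint and dataset, chosen only to keep the sketch small.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb", split="train[:1000]")
encoded = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=encoded,
)
trainer.train()
```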

Qualifications Required

• Doctoral or master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field

• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models

• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).

• Experience working with cloud-based platforms (AWS, GCP, Azure)

Additional Skills

• Excellent problem-solving and analytical abilities

• Strong communication skills, both written and verbal

• Ability to thrive in a collaborative and fast-paced environment

Gipfel & Schnell Consultings Pvt Ltd
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
Best in industry
Data engineering
ADF
Data Factory
SQL Azure
Databricks
+4 more

Data Engineer

 

Brief Posting Description:

This person will work independently or with a team of data engineers on cloud technology products, projects, and initiatives. They will work with all customers, both internal and external, to make sure all data-related features are implemented in each solution, and will collaborate with business partners and other technical teams across the organization as required to deliver the proposed solutions.

 

Detailed Description:

·        Works with Scrum masters, product owners, and others to identify new features for digital products.

·        Works with IT leadership and business partners to design features for the cloud data platform.

·        Troubleshoots production issues of all levels and severities, and tracks progress from identification through resolution.

·        Maintains culture of open communication, collaboration, mutual respect and productive behaviors; participates in the hiring, training, and retention of top tier talent and mentors team members to new and fulfilling career experiences.

·        Identifies risks, barriers, efficiencies and opportunities when thinking through development approach; presents possible platform-wide architectural solutions based on facts, data, and best practices.

·        Explores all technical options when considering solution, including homegrown coding, third-party sub-systems, enterprise platforms, and existing technology components.

·        Actively participates in collaborative effort through all phases of software development life cycle (SDLC), including requirements analysis, technical design, coding, testing, release, and customer technical support.

·        Develops technical documentation, such as system context diagrams, design documents, release procedures, and other pertinent artifacts.

·        Understands lifecycle of various technology sub-systems that comprise the enterprise data platform (i.e., version, release, roadmap), including current capabilities, compatibilities, limitations, and dependencies; understands and advises of optimal upgrade paths.

·        Establishes relationships with key IT, QA, and other corporate partners, and regularly communicates and collaborates accordingly while working on cross-functional projects or production issues.

 

 

 

 

Job Requirements:

 

EXPERIENCE:

2 years required; 3 - 5 years preferred experience in a data engineering role.

2 years required, 3 - 5 years preferred experience in Azure data services (Data Factory, Databricks, ADLS, Synapse, SQL DB, etc.)

 

EDUCATION:

Bachelor’s degree in information technology, computer science, or a data-related field preferred

 

SKILLS/REQUIREMENTS:

Expertise working with databases and SQL.

Strong working knowledge of Azure Data Factory and Databricks

Strong working knowledge of code management and continuous integration systems (Azure DevOps or GitHub preferred)

Strong working knowledge of cloud relational databases (Azure Synapse and Azure SQL preferred)

Familiarity with Agile delivery methodologies

Familiarity with NoSQL databases (such as CosmosDB) preferred.

Any experience with Python, DAX, Azure Logic Apps, Azure Functions, IoT technologies, PowerBI, Power Apps, SSIS, Informatica, Teradata, Oracle DB, and Snowflake preferred but not required.

Ability to multi-task and reprioritize in a dynamic environment.

Outstanding written and verbal communication skills

 

Working Environment:

General Office – Work is generally performed within an office environment, with standard office equipment. Lighting and temperature are adequate and there are no hazardous or unpleasant conditions caused by noise, dust, etc. 

 

Physical Requirements:

Work is generally sedentary in nature but may require standing and walking for up to 10% of the time. 

 

Mental Requirements:

Employee required to organize and coordinate schedules.

Employee required to analyze and interpret complex data.

Employee required to problem-solve. 

Employee required to communicate with the public.

REConnect Energy

Posted by Bhavya Das
Bengaluru (Bangalore)
5 - 7 yrs
₹10L - ₹15L / yr
Data Structures
Data engineering
NOSQL Databases
SQL
AWS CloudFormation
+5 more

Principal Engineer


Work at the intersection of Energy, Weather & Climate Sciences and Artificial Intelligence

Responsibilities:

  • Engineering - Take complete ownership of engineering stacks, including Data Engineering and MLOps. Define and maintain the software systems architecture for high-availability 24x7 systems.
  • Leadership - Lead a team of engineers and analysts, managing engineering development as well as round-the-clock service delivery. Provide mentorship and technical guidance to team members and contribute to their professional growth. Manage weekly and monthly reviews with team members and senior management.
  • Product Development - Contribute to new product development through engineering solutions to business and market requirements. Interact with cross-functional teams to bring forward a technology perspective.
  • Operations - Manage the delivery of critical services to power utilities with an expectation of zero downtime. Take ownership of uninterrupted service delivery.

Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or equivalent
  • Proficiency in Python programming, with expertise in data engineering and machine learning deployment
  • Experience with databases, including MySQL and NoSQL stores
  • Experience developing and maintaining critical, high-availability systems will be given strong preference
  • Experience working with the AWS cloud platform
  • At least 3 years of experience leading a team of engineers and analysts
  • Strong analytical and data-driven approach to problem solving


Experience: 5 - 7 years

Location: Bangalore




RAPTORX.AI

Posted by Pratyusha Vemuri
Hyderabad
5 - 7 yrs
₹10L - ₹25L / yr
NodeJS (Node.js)
React.js
Data Visualization
Graph Databases
Neo4J
+2 more

Role Overview

We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies. 

This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.

Responsibilities

  • Lead the development of fintech solutions, focusing on fraud prevention and AML, using TypeScript, React.js, Python, and SQL databases.
  • Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
  • Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
  • Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
  • Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
  • Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.

Requirements

  • 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
  • Expertise in TypeScript and React.js, and familiarity with Python.
  • Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
  • Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
  • Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
  • A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.

Why Join Us?

  • Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
  • Contribute to a company with ambitious goals to revolutionize software development and make a historic impact.
  • Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
  • Work in an environment that values innovation, leadership, and the long-term success of its employees.


Wissen Technology
Mumbai
5 - 10 yrs
₹5L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Data engineering

Skills required


-5+ years of software development experience.

-Excellent skills in Java and/or Scala programming, with expertise in backend architectures, messaging technologies, and related frameworks.

-Developing data pipelines (batch/streaming), complex data transformations, ETL orchestration, and data migration; developing and maintaining data warehouses / data lakes.

-Extensive experience in complex SQL queries, database development, and data engineering, including the development of procedures, packages, functions, and handling exceptions.

-Knowledgeable in issue tracking tools (e.g., JIRA), code collaboration tools (e.g., Git/GitLab), and team collaboration tools (e.g., Confluence/Wiki).

-Proficient in Linux/Unix, including shell scripting.

-Ability to translate business and architectural features into quality, consistent software design.

-Solid understanding of programming practices, emphasizing reusable, flexible, and reliable code.

Wissen Technology

Posted by Lokesh Manikappa
Mumbai
4 - 9 yrs
₹15L - ₹32L / yr
Java
ETL
SQL
Data engineering
Scala

Java/Scala + Data Engineer

 

Experience: 5-10 years

Location: Mumbai

Notice: Immediate to 30 days

Required Skills:

·       5+ years of software development experience.

·       Excellent skills in Java and/or Scala programming, with expertise in backend architectures, messaging technologies, and related frameworks.

·       Developing data pipelines (batch/streaming), complex data transformations, ETL orchestration, and data migration; developing and maintaining data warehouses / data lakes.

·       Extensive experience in complex SQL queries, database development, and data engineering, including the development of procedures, packages, functions, and handling exceptions.

·       Knowledgeable in issue tracking tools (e.g., JIRA), code collaboration tools (e.g., Git/GitLab), and team collaboration tools (e.g., Confluence/Wiki).

·       Proficient in Linux/Unix, including shell scripting.

·       Ability to translate business and architectural features into quality, consistent software design.

·       Solid understanding of programming practices, emphasizing reusable, flexible, and reliable code.

Career Forge

Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, Pyspark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, and continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/NumPy); see the sketch after this list.

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.
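As a hedged illustration of routine PyData work on telemetry-style data (purely illustrative; the signal and column names are invented), the sketch below downsamples a per-second series to one-minute means with pandas:

```python
import numpy as np
import pandas as pd

# Invented telemetry signal sampled once per second for one hour.
idx = pd.date_range("2024-01-01", periods=3600, freq="s")
rng = np.random.default_rng(0)
signal = pd.DataFrame({"speed_kmh": rng.normal(80, 5, len(idx))}, index=idx)

# Downsample to one-minute means - a common first step for test-bench data.
per_minute = signal.resample("1min").mean()
print(per_minute.head())
```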


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Remote only
2 - 5 yrs
₹10L - ₹20L / yr
Data engineering

A candidate with 2 to 6 years of relevant experience in the Snowflake Data Cloud.

Mandatory Skills:

• Excellent knowledge of the Snowflake Data Cloud

• Excellent knowledge of SQL

• Good knowledge of ETL

• Must have working knowledge of Azure Data Factory

• Must have general awareness of Azure Cloud

• Must be aware of optimization techniques for data retrieval and data loading

Hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
Python
Amazon Redshift
Amazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipelines, features, and data architecture.


Here’s what will be expected out of you:

➢ Ability to work with a fast-paced startup mindset; should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with DevOps and senior architects to come up with scalable system and model architectures that enable real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.

➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow (see the sketch after this list).

 ➢ Strong Python and SQL coding skills.

➢ Strong experience in distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with data extraction techniques like CDC or time/batch-based extraction and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
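As a hedged sketch of how an orchestration tool like Airflow wires extract and load steps into a daily pipeline (illustrative only; the DAG id and callables are invented), consider:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: a real task would pull a CDC slice or a batch file here.
    ...

def load():
    # Placeholder: a real task would copy the staged data into Redshift here.
    ...

with DAG(
    dag_id="daily_orders_etl",          # invented pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```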


Note:

Experience at product-based or e-commerce companies is an added advantage.

US-based product company in the AI domain

Agency job
via New Era India by Asha P
Bengaluru (Bangalore)
3 - 10 yrs
₹30L - ₹50L / yr
Data engineering
Data modeling
Python

Requirements:

  • 2+ years of experience (4+ for Senior Data Engineer) with system/data integration and the development or implementation of enterprise and/or cloud software.
  • Engineering degree in Computer Science, Engineering, or a related field.
  • Extensive hands-on experience with data integration/EAI technologies (File, API, Queues, Streams), ETL tools, and building custom data pipelines.
  • Demonstrated proficiency with Python, JavaScript, and/or Java.
  • Familiarity with version control/SCM is a must (experience with git is a plus).
  • Experience with relational and NoSQL databases (any vendor).
  • Solid understanding of cloud computing concepts.
  • Strong organisational and troubleshooting skills with attention to detail.
  • Strong analytical ability, judgment, and problem-solving techniques.
  • Interpersonal and communication skills, with the ability to work effectively in a cross-functional team.


SimpliFin
Bengaluru (Bangalore)
6 - 14 yrs
₹20L - ₹50L / yr
SaaS
Engineering Management
Artificial Intelligence (AI)
Data engineering
Financial services

We are looking for a passionate technologist with experience building SaaS products for a once-in-a-lifetime opportunity to lead Engineering for an AI-powered Financial Operations platform that lets users seamlessly monitor, optimize, reconcile, and forecast cashflow with ease.


Background


An incredibly rare opportunity for a VP of Engineering to join a top-tier VC-incubated SaaS startup with an outstanding management team. The product is currently in the build stage, with a solid design-partner pipeline of ~$250K, and the company will soon raise a pre-seed/seed round with marquee investors.


Responsibilities


  • Develop and implement the company's technical strategy and roadmap, ensuring that it aligns with the overall business objectives and is scalable, reliable, and secure.


  • Manage and optimize the company's technical resources, including staffing, software, hardware, and infrastructure, to ensure that they are being used effectively and efficiently.


  • Work with the founding team and other executives to identify opportunities for innovation and new technology solutions, and evaluate the feasibility and impact of these solutions on the business.


  • Lead the engineering function in developing and deploying high-quality software products and solutions, ensuring that they meet or exceed customer requirements and industry standards.


  • Analyze and evaluate technical data and metrics, identifying areas for improvement and implementing changes to drive efficiency and effectiveness.


  • Ensure that the company is in compliance with all legal and regulatory requirements, including data privacy and security regulations.


Eligibility criteria:


  • 6+ years of experience in developing scalable SaaS products.


  • Strong technical background, with 6+ years of experience focused on SaaS, AI, and finance software.


  • Prior experience in leadership roles.


  • Entrepreneurial mindset, with a strong desire to innovate and grow a startup from the ground up.


Perks:


  • Vested Equity.


  • Ownership in the company.


  • Build alongside passionate and smart individuals.


codersbrain

Posted by Tanuj Uppal
Bengaluru (Bangalore)
10 - 18 yrs
Best in industry
Flink
Apache Flink
Java
Data engineering

Flink Sr. Developer


Location: Bangalore(WFO)


Mandatory Skills & Experience (10+ years): must have hands-on experience with Flink, Kubernetes, Docker, microservices, any one of Kafka/Pulsar, CI/CD, and Java.


Job Responsibilities:

As the Data Engineer lead, you are expected to engineer, develop, support, and deliver real-time streaming applications that model real-world network entities, and to have a good understanding of Telecom Network KPIs so as to improve the customer experience through automation of operational network data. Real-time application development will include building stateful in-memory backends and real-time streaming APIs, leveraging real-time databases such as Apache Druid.

  • Architecting and creating the streaming data pipelines that will enrich the data and support the use cases for telecom networks.
  • Collaborating closely with multiple stakeholders, gathering requirements and seeking iterative feedback on recently delivered application features.
  • Participating in peer review sessions to provide teammates with code review as well as architectural and design feedback.
  • Composing detailed low-level design documentation, call flows, and architecture diagrams for the solutions you build.
  • Running to a crisis anytime the Operations team needs help.
  • Performing duties with minimum supervision and participating in cross-functional projects as scheduled.


Skills:

  • A Flink Sr. Developer who has implemented and dealt with failure scenarios of processing data through Flink (see the sketch after this list).
  • Experience with Java, K8s, Argo CD/Workflow, Prometheus, and Aether.
  • Familiarity with object-oriented design patterns.
  • Experience with application development DevOps tools.
  • Experience with distributed cloud-native application design deployed on Kubernetes platforms.
  • Experience with Postgres, Druid, and Oracle databases.
  • Experience with a messaging bus: Kafka/Pulsar.
  • Experience with AI/ML: Kubeflow, JupyterHub.
  • Experience building real-time applications that leverage streaming data.
  • Experience with streaming message bus platforms, either Kafka or Pulsar.
  • Experience with Apache Spark applications and Hadoop platforms.
  • Strong problem-solving skills.
  • Strong written and oral communication skills.
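As a hedged, toy illustration of stream processing in Flink (via the PyFlink API; a production job here would more likely be in Java and read from Kafka/Pulsar), the sketch below filters a keyed stream of invented KPI readings:

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Toy stream standing in for network-KPI events: (entity, health score).
events = env.from_collection([("cell-1", 0.91), ("cell-2", 0.42), ("cell-1", 0.88)])

# Keep only degraded KPIs and key the stream by network entity,
# the precondition for any stateful per-entity processing.
degraded = events.filter(lambda e: e[1] < 0.5).key_by(lambda e: e[0])
degraded.print()

env.execute("kpi-alerts-sketch")
```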

Mobile Programming LLC

Posted by Sukhdeep Singh
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Data engineering
Nifi
DevOps
ETL

Job Description

Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore

Please note: This position is focused on development rather than migration. Experience in NiFi or Tibco is mandatory.

Mandatory Skills: ETL, DevOps platform, NiFi or Tibco

We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.

 

Responsibilities:

  • Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
  • Develop and maintain data-oriented scripting using languages such as Python.
  • Create and manage data structures to ensure efficient and accurate data storage and retrieval.
  • Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
  • Utilize ETL tools such as NiFi and Tibco to extract, transform, and load data into various systems.
  • Apply hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
  • Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
  • Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
  • Stay up to date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.

 

Requirements:

  • A minimum of 6 years of relevant experience as a Data Engineer.
  • Proficiency in ETL, SQL, and other advanced data engineering techniques.
  • Strong programming skills in scripting languages such as Python.
  • Experience in creating and maintaining data structures for efficient data storage and retrieval.
  • Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
  • Hands-on experience with ETL tools, particularly NiFi and Tibco.
  • In-depth knowledge of database structures, including MSSQL and Vertica.
  • Proven experience in managing and operating data platforms.
  • Strong problem-solving and analytical skills with the ability to handle complex data challenges.
  • Excellent communication and collaboration skills to work effectively in a team environment.
  • Self-motivated, with a strong drive for learning and keeping up to date with the latest industry trends.

TensorGo Software Private Limited
Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Python
PySpark
Apache Airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.

● Working with our engineering team to support custom solutions offered to the product development.

● Filling the gap between development, engineering and data ops.

● Creating, maintaining and documenting scripts to support ongoing custom solutions.

● Excellent organizational skills, including attention to precise details

● Strong multitasking skills and ability to work in a fast-paced environment

● 5+ years experience with Python to develop scripts.

● Know your way around RESTful APIs (able to integrate; publishing is not necessary).

● You are familiar with pulling and pushing files from SFTP and AWS S3 (see the sketch after this list).

● Experience with any Cloud solutions including GCP / AWS / OCI / Azure.

● Familiarity with SQL programming to query and transform data from relational Databases.

● Familiarity to work with Linux (and Linux work environment).

● Excellent written and verbal communication skills

● Extracting, transforming, and loading data into internal databases and Hadoop

● Optimizing our new and existing data pipelines for speed and reliability

● Deploying product build and product improvements

● Documenting and managing multiple repositories of code

● Experience with SQL and NoSQL databases (Cassandra, MySQL)

● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena)

● Hands-on experience in Airflow

● Understanding of best practices, common coding patterns, and good practices around storing, partitioning, warehousing, and indexing of data

● Experience reading data from Kafka topics (both live stream and offline)

● Experience with PySpark and DataFrames
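As a hedged example of the S3 file movement mentioned above (the bucket and keys are invented), a minimal boto3 sketch:

```python
import boto3

s3 = boto3.client("s3")

# Invented bucket and keys: pull a source file down, then push a processed one back.
s3.download_file("example-bucket", "incoming/data.csv", "/tmp/data.csv")
s3.upload_file("/tmp/data_clean.csv", "example-bucket", "processed/data_clean.csv")
```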

Responsibilities:

You’ll be:

● Collaborating across an agile team to continuously design, iterate, and develop big data systems.

● Extracting, transforming, and loading data into internal databases.

● Optimizing our new and existing data pipelines for speed and reliability.

● Deploying new products and product improvements.

● Documenting and managing multiple repositories of code.

Mactores Cognition Private Limited
Remote only
2 - 15 yrs
₹6L - ₹40L / yr
Amazon Web Services (AWS)
PySpark
Athena
Data engineering

As an AWS Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and the flexibility of your solution, which scales to respond to broader business questions.


If you love to solve problems using your skills, then come join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What you will do?

  • Write efficient code in PySpark and AWS Glue
  • Write SQL queries in Amazon Athena and Amazon Redshift
  • Explore new technologies and learn new techniques to solve business problems creatively
  • Collaborate with many teams - engineering and business - to build better data products and services
  • Deliver projects collaboratively with the team and manage updates to customers on time


What are we looking for?

  • 1 to 3 years of experience in Apache Spark, PySpark, and AWS Glue
  • 2+ years of experience writing ETL jobs using PySpark and SparkSQL
  • 2+ years of experience with SQL queries and stored procedures
  • A deep understanding of the DataFrame API and the transformation functions supported by Spark 2.4+ (see the sketch after this list)
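As a hedged refresher on the DataFrame API named above (invented data; the same code runs unchanged in a Glue or EMR Spark job), a short PySpark sketch:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

# Invented sample data for illustration.
orders = spark.createDataFrame(
    [(1, "books", 120.0), (2, "books", 80.0), (3, "toys", 50.0)],
    ["order_id", "category", "amount"],
)

# Typical DataFrame transformations: filter, group, aggregate, sort.
summary = (
    orders.filter(F.col("amount") > 60)
    .groupBy("category")
    .agg(F.sum("amount").alias("total"), F.count("*").alias("orders"))
    .orderBy(F.desc("total"))
)
summary.show()
```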


You will be preferred if you have

  • Prior experience in working on AWS EMR, Apache Airflow
  • Certifications: AWS Certified Big Data – Specialty, Cloudera Certified Big Data Engineer, or Hortonworks Certified Big Data Engineer
  • Understanding of DataOps Engineering


Life at Mactores


We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.


1. Be one step ahead

2. Deliver the best

3. Be bold

4. Pay attention to the detail

5. Enjoy the challenge

6. Be curious and take action

7. Take leadership

8. Own it

9. Deliver value

10. Be collaborative


We would like you to read more details about the work culture on https://mactores.com/careers 


The Path to Joining the Mactores Team

At Mactores, our recruitment process is structured around three distinct stages:


Pre-Employment Assessment: 

You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.


Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.


HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.


At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.

Persistent

Agency job
via Bohiyaanam Talent Solutions LLP by TrishaDutt Tekgminus
Pune, Mumbai, Bengaluru (Bangalore), Indore, Kolkata
6 - 7 yrs
₹12L - ₹18L / yr
MuleSoft
ETL QA
Automation
Data engineering

I am looking for a MuleSoft Developer for a reputed MNC.

 

Experience: 6+ Years

Relevant experience: 4 Years

Location : Pune, Mumbai, Bangalore, Indore, Kolkata

 

Skills:

MuleSoft

Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions, and build the strategy and roadmap.
• Possesses wide exposure to the complete lifecycle of data, from creation to consumption.
• Has in the past built repeatable tools / data models to solve specific business problems.
• Should have hands-on experience of having worked on projects (either as a consultant or within a company) that required them to:
  o Provide consultation to senior client personnel.
  o Implement and enhance data warehouses or data lakes.
  o Work with business teams, or as part of the team, to implement process re-engineering driven by data analytics/insights.
• Should have a deep appreciation of how data can be used in decision-making.
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology.
• Must have a solution-creation mindset, with the ability to design and enhance scalable data platforms to address business needs.
• Working experience with data engineering tools on one or more cloud platforms: Snowflake, AWS/Azure/GCP.
• Engages with technology teams from Tredence and clients to create last-mile connectivity for solutions; should have experience working with technology teams.
• Demonstrated ability in thought leadership: articles/white papers/interviews.

Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Tredence
Bengaluru (Bangalore), Pune, Gurugram, Chennai
8 - 12 yrs
₹12L - ₹30L / yr
Snowflake schema
Snowflake
SQL
Data modeling
Data engineering
+1 more

JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:

• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view. Develop and maintain documentation of the data architecture, data flow and data models of the data warehouse appropriate for various audiences. Provide direction on adoption of Cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.

• Providing technical leadership to large enterprise-scale projects. You will also be responsible for preparing estimates and defining technical solutions for proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project.

ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have 5+ years total in IT, including 2+ years working as a Snowflake Data Architect and 4+ years on data warehouse, ETL, and BI projects.
• Must have at least two end-to-end implementations of the Snowflake cloud data warehouse and three end-to-end on-premise data warehouse implementations, preferably on Oracle.

• Expertise in Snowflake – data modelling, ELT using Snowflake SQL, implementing complex stored Procedures and standard DWH and ETL concepts
• Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and an understanding of how to use these features (see the sketch after this list)
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, Big Data model techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PAAS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience of Agile development methodologies
• Strong written and oral communication skills; effective and persuasive in both
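As a hedged illustration of two Snowflake features named above, zero-copy clone and time travel (connection details and table names are placeholders, not part of this posting), a short sketch using the Snowflake Python connector:

```python
import snowflake.connector

# Placeholder credentials - supply your own account, user, and password.
conn = snowflake.connector.connect(account="my_account", user="me", password="***")
cur = conn.cursor()

# Zero-copy clone: an instant, metadata-only copy of a table.
cur.execute("CREATE TABLE orders_clone CLONE orders")

# Time travel: query the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone())
```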

Nice-to-have Skills/Qualifications: Bachelor's and/or master’s degree in computer science or equivalent experience.
• Strong communication, analytical and problem-solving skills with a high attention to detail.

 

About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay ahead with all the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality

Miracle Software Systems, Inc
Posted by Ratnakumari Modhalavalasa
Visakhapatnam
3 - 5 yrs
₹2L - ₹4L / yr
Hadoop
Apache Sqoop
Apache Hive
Apache Spark
Apache Pig
+9 more
Position : Data Engineer

Duration : Full Time

Location : Vishakhapatnam, Bangalore, Chennai

Years of experience: 3+ years

Job Description :

- 3+ Years of working as a Data Engineer with thorough understanding of data frameworks that collect, manage, transform and store data that can derive business insights.

- Strong communication skills (written and verbal), along with being a good team player.

- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.)

- 2+ years of strong experience with SQL and Python (Data Engineering focused).

- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is an added advantage and preferred.

- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
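
By way of a hedged example of the Hadoop-ecosystem skills above, a minimal PySpark sketch that reads a Hive table (for instance one landed by a Sqoop import) and writes a daily rollup back to Hive; the database, table, and column names are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    # Hive support lets Spark query tables registered in the Hive metastore.
    spark = (SparkSession.builder
             .appName("orders-daily-rollup")
             .enableHiveSupport()
             .getOrCreate())

    # 'raw.orders' is a hypothetical table, e.g. ingested from an RDBMS via Sqoop.
    orders = spark.sql("SELECT order_id, amount, order_date FROM raw.orders")

    daily = (orders.groupBy("order_date")
                   .agg(F.sum("amount").alias("revenue"),
                        F.count("order_id").alias("num_orders")))

    # Persist the rollup back to Hive for BI consumption.
    daily.write.mode("overwrite").saveAsTable("curated.daily_revenue")
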
Getinz

Posted by kousalya k
Remote only
4 - 8 yrs
₹10L - ₹15L / yr
Penetration testing
Python
Powershell
Bash
Spark
+5 more
- 3+ years of Red Team experience
- 5+ years of hands-on penetration testing experience would be an added plus
- Strong knowledge of programming or scripting languages, such as Python, PowerShell, Bash
- Industry certifications like OSCP and AWS are highly desired for this role
- Well-rounded knowledge of security tools, software, and processes
Astegic

Posted by Nikita Pasricha
Remote only
5 - 7 yrs
₹8L - ₹15L / yr
Data engineering
SQL
Relational Database (RDBMS)
Big Data
Scala
+14 more

WHAT YOU WILL DO:

• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop, and AWS 'big data' technologies (EC2, EMR, S3, Athena).
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
• Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
• Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.

REQUIRED SKILLS & QUALIFICATIONS:

• 5+ years of experience in a Data Engineer role.
• Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
• Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
• A successful history of manipulating, processing, and extracting value from large disconnected datasets.
• Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
• Strong project management and organizational skills.
• Experience supporting and working with cross-functional teams in a dynamic environment.
• Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
• Experience with AWS cloud services: EC2, EMR, S3, Athena (see the Athena sketch after this list).
• Experience with Linux.
• Experience with object-oriented/object-function scripting languages: Python, Java, Shell, Scala, etc.
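
To make the Athena item above concrete, a minimal boto3 sketch that runs a SQL query over S3-resident data and prints the result; the region, database, table, and output bucket are hypothetical:

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Start a query over data catalogued from S3 (all names are placeholders).
    qid = athena.start_query_execution(
        QueryString="SELECT channel, COUNT(*) AS signups FROM events GROUP BY channel",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])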


PREFERRED SKILLS & QUALIFICATIONS:

• Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

Network Science
Posted by Leena Shirsale
Mumbai, Navi Mumbai
5 - 8 yrs
₹20L - ₹25L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Data Science
+4 more
  • Collaborate with the business teams to understand the data environment in the organization; develop and lead the Data Science team to test and scale new algorithms through pilots and the subsequent scaling up of the solutions
  • Influence, build, and maintain the large-scale data infrastructure required for the AI projects, and integrate it with external IT infrastructure/services
  • Act as the single point of contact for all data-related queries; maintain a strong understanding of internal and external data sources; provide inputs in deciding data schemas
  • Design, develop and maintain the framework for the analytics solutions pipeline
  • Provide inputs to the organization’s initiatives on data quality and help implement frameworks and tools for the various related initiatives
  • Work in cross-functional teams of software/machine learning engineers, data scientists, product managers, and others to build the AI ecosystem
  • Collaborate with the external organizations including vendors, where required, in respect of all data-related queries as well as implementation initiatives
Myrsa Technology Solutions
Posted by Dipali G
Remote, Thane
3 - 5 yrs
₹3L - ₹5L / yr
Analytical Skills
Data engineering
Product Management
Search Engine Optimization (SEO)

This role is focused on Growth initiatives at cure.fit. As a Growth PM, you will be responsible for driving growth in users and revenue via a data-driven, experiment-based, systematic approach, and for executing key growth initiatives to achieve growth targets.

Key responsibilities:
• Identify levers and opportunities through which we can gain more users, higher sales, and better engagement
• Build and deploy experiments to capitalise on these opportunities
• Identify what works and what doesn't, and scale experiments up or down accordingly
• Develop a deep understanding of customers and engagement loops
• Productify key learnings from the experiments
• Participate in tactical sales or marketing initiatives
• Over time, build a growth machinery and a systematised approach to driving growth
• Strengthen capabilities in key areas such as SEO/ASO/Content/Referral

This is a foundational role on the team, and you will have a one-of-a-kind opportunity to develop a deep understanding of the core business/engagement loops and have a high degree of impact. PMs on the team enjoy a high degree of responsibility and ownership. You will work closely with the Sales, Marketing, and Technology teams to conceptualise, plan, execute, and productize growth initiatives.

 

Looking for:

Understanding of consumer behaviour and psychology

We are looking for PMs/Engineers/Marketers with 3+ years of PM/Software Engineering experience in building consumer software products. You are very customer focused and have a strong understanding of the customer behaviour and psychology to come up with ideas and hypothesis to drive growth. You care not just about the technology but also the psychology that makes great products.

 

Impact orientation, curiosity, and hacker-mindset

You are impact-oriented and constantly look to achieve scale and leverage by using data and consumer insights. You are highly curious and eager to learn and develop intelligence - you are never satisfied by just knowing “what happened” but are keen to understand “why it happened”. You have a hacker mindset and a sense of hustle, which you use to prioritize and bring ideas to life.

 

Strong analytical and scrappy technical skills

You have 3+ years of experience as a PM or as a Software Engineer focused on building consumer products, especially on mobile devices (apps, progressive web). You come with analytical and data operations skills (ETL) and you are largely self-sufficient in pulling and analyzing data and deriving sound conclusions.

 

Skill set:

  • Prior experience of 3+ years in building consumer products
  • Strong understanding of hypothesis-driven, data-backed experimentation
  • Strong analytical and data engineering skills
Opportunity with Largest Conglomerate


Agency job
via Seven N Half by Shreeja Shetty
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Kochi (Cochin)
15 - 20 yrs
₹30L - ₹60L / yr
Enterprise architecture
Data architecture
Data engineering
Technical Architecture

Digital and Technology Architecture

  • Head the digital transformation and innovation projects from idea generation to new ventures in close collaboration with business stakeholders and external partners
  • Present the vision & value of proposed architectures and solutions to a wide range of audiences in alignment with business priorities and objectives
  • Plans tasks and estimates for the required research and volume of activities to complete work
  • Own and assess non-functional requirements and propose solutions for Availability, Backup, Capacity, Performance, Redundancy, Reliability, Scalability, Supportability, Risks and Costs models
  • Provide strategic guidance to teams on managing third-party service providers in terms of service levels, costing, etc.
  • Drive the team to ensure appropriate documentation is developed in support of value realization
  • Lead the technical team and head collaboration between Business Users and software providers to build digital solutions

Enterprise Architecture

  • Ownership of overall Enterprise Architecture including compliances and standards
  • Head the overall Architecture blueprint and roadmap for applications, aligning with Enterprise Architecture
  • Identify important potential technologies and approaches to address current and future Enterprise needs, evaluating their applicability and fit, as well as leading the definition of standards and best practice for their use

Data Architecture

  • Ownership of overall Data Architecture including compliances and standards
  • Head the overall Architecture blueprint and roadmap for applications, aligning with Data Architecture
  • Identify important potential technologies and approaches to address current and future Data needs, evaluating their applicability and fit, as well as leading the definition of standards and best practice for their use
Celebal Technologies

Posted by Payal Hasnani
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
  o Take end-to-end ownership of multiple projects / project tracks
  o Create and maintain project plans and other related documentation for project objectives, scope, schedule, and delivery milestones
  o Lead and participate across all the phases of software engineering, right from requirements gathering to GO LIVE
  o Lead internal team meetings on solution architecture, effort estimation, manpower planning, and resource (software/hardware/licensing) planning
  o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
  o Act as the Scrum Master
  o Conduct SCRUM ceremonies like Sprint Planning, Daily Standup, Sprint Retrospective
  o Set clear objectives for the project and roles/responsibilities for each team member
  o Train and mentor the team on their job responsibilities and SCRUM principles
  o Make the team accountable for their tasks and help the team in achieving them
  o Identify the requirements and come up with a plan for skill development for all team members
• Communication
  o Be the Single Point of Contact for the client in terms of day-to-day communication
  o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
  o Create and document processes across all disciplines of software engineering
  o Identify gaps and continuously improve processes within the team
  o Encourage team members to contribute towards process improvement
  o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps, and Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT, and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, PowerBI, Machine Learning, Cloud Infrastructure
• Background in BFSI with focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science
Multinational Company


Agency job
via Telamon HR Solutions by Praveena Sagar
Remote only
5 - 15 yrs
₹27L - ₹30L / yr
Data engineering
Google Cloud Platform (GCP)
Python

• The incumbent should have hands-on experience in data engineering and GCP data technologies.

• Should work with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.

• Should work with Agile and DevOps techniques and implementation approaches in the delivery.

• Showcase your GCP data engineering experience when communicating with clients on their requirements, turning these into technical data solutions.

• Build and deliver data solutions using GCP products and offerings.
• Have hands-on experience with Python.
• Experience with SQL or MySQL. Experience with Looker is an added advantage.

DFCS Technologies


Agency job
via dfcs Technologies by SheikDawood Ali
Remote, Chennai, Anywhere India
1 - 5 yrs
₹9L - ₹14L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
  • Create and maintain optimal data pipeline architecture,
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
    • Big data tools: Hadoop, Spark, Kafka, etc.
    • Relational SQL and NoSQL databases, including Postgres and Cassandra
    • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
    • AWS cloud services: EC2, EMR, RDS, Redshift
    • Stream-processing systems: Storm, Spark-Streaming, etc. (see the structured-streaming sketch below)
    • Object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.
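
As a hedged illustration of the stream-processing item above, a minimal Spark Structured Streaming sketch that reads a Kafka topic and lands it in S3 as parquet; the broker, topic, and bucket names are hypothetical, and the spark-sql-kafka package is assumed to be on the classpath:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

    # Subscribe to a hypothetical 'clickstream' topic; broker address is a placeholder.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "clickstream")
              .load())

    # Kafka values arrive as bytes; cast to string before downstream parsing.
    decoded = events.select(F.col("value").cast("string").alias("json_payload"))

    # Checkpointing gives the file sink end-to-end exactly-once semantics.
    (decoded.writeStream
            .format("parquet")
            .option("path", "s3a://example-lake/raw/clickstream/")
            .option("checkpointLocation", "s3a://example-lake/checkpoints/clickstream/")
            .start()
            .awaitTermination())
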
Quicken Inc

Posted by Shreelakshmi M
Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
SQL
Python
Data engineering

Since 1988, Quicken has been the top personal finance management software for millions of consumers. We pioneered a radically easier and faster way for people to manage their household finances. Since then we’ve continued to focus on delighting customers and making the Quicken product and experience better than ever.

Job Title:        Senior Data Engineer

Location:        Bangalore, India

Department:  Product Development

 

Quicken is the #1 personal finance management software with a 30-year heritage of helping millions of individuals and families stay on top of their finances. We are working on a strategy to deliver awesome personal finance experience to our customers across Windows, Mac, iOS, Android, and Web.

The successful candidate will join a fast-paced software development team building the next generation of Quicken Data Platform. The team uses the latest software development technology and tools.

If you are looking to be part of a high-performing team at the heart of a fun, energetic, and innovative company, come join the Quicken Team!

 

Responsibilities:

 

This is an opportunity to be a highly visible and key contributor on a small and passionate team delivering innovative data platform solutions across the company. You’ll help shape and deliver on an aggressive and innovative roadmap in areas key to Quicken’s continued success and growth.

  • The Senior Data Engineer role is a technical, hands-on role. The responsibilities range from being at the vanguard of solving technical problems to venturing into uncharted areas of technology to solve complex problems.
  • Implement or operate comprehensive data platform components to balance optimization of data access with batch loading and resource utilization factors, per customer requirements.
  • Develop robust data platform components for sourcing, loading, transformation, and extracting data from various sources.
  • Build metadata processes and frameworks.
  • Create supporting documentation, such as metadata and diagrams of entity relationships, business processes, and process flow.
  • Maintain standards, such as organization, structure, or nomenclature, for data platform elements, such as data architectures, pipelines, frameworks, models, tools, and databases.
  • Implement business rules via scripts, middleware, or other technologies.
  • Map data between source systems and data lake
  • Ability to work independently and produce high-quality code on components related to the Data Platform, along with creativity, responsibility, and autonomy.
  • Participate in the planning, design, and implementation of features, working with small teams that consist of engineers, product managers, and marketing.
  • Demonstrate strong technical talent throughout the organization and engineer products that meet future scalability, performance, security, and quality goals while maintaining a cohesive user experience across different components and products.
  • Adopt and share best practices of software development methodology and frameworks used in data platform.
  • Passion for continuous learning, experimenting and applying cutting edge technology and software paradigms. Also responsible for fostering this culture across the team.

 

Qualifications: 

 

  • 3+ years of hands-on experience with data platform technologies and tools
  • Extensive experience in Python
  • Should be comfortable with using REST APIs
  • Experience in at least 2 of the 3 stages of any big data pipeline - data ingestion/acquisition, data processing/transformation, and data visualization (a minimal end-to-end sketch follows this list)
  • Experience with database user interface and query software - Structured Query Language (SQL)
  • Experience in one or more structured DBMSs, along with data modelling - MySQL
  • Nice to have working experience in big data processing frameworks like Spark
  • Nice to have working knowledge on visualization tools like Tableau, Kibana, Amazon QuickSight
  • Nice to have familiarity in AWS EMR, Kinesis, EC2, S3, AWS Glue
  • Experience working with geographically distributed teams across different time zones
  • Strong communication skills, both oral and written whether in-person or virtual
  • Experience with Agile methodologies
  • Bachelor’s degree in computer science or other technical discipline, or equivalent experience
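
To make the three pipeline stages above concrete, a minimal self-contained sketch, assuming a hypothetical REST endpoint and using SQLite as a stand-in for the relational store (in practice MySQL, per the requirements):

    import sqlite3
    import requests

    # Stage 1 - ingestion: pull records from a hypothetical REST API.
    rows = requests.get("https://api.example.com/v1/transactions", timeout=30).json()

    # Stage 2 - transformation: keep only the fields downstream models need.
    cleaned = [(r["id"], r["category"], float(r["amount"])) for r in rows]

    # Stage 3 - load: write into a relational store for query and visualization tools.
    conn = sqlite3.connect("demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS txns (id TEXT PRIMARY KEY, category TEXT, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO txns VALUES (?, ?, ?)", cleaned)
    conn.commit()

    # The kind of SQL rollup a dashboard (Tableau, QuickSight) would consume.
    for category, total in conn.execute("SELECT category, SUM(amount) FROM txns GROUP BY category"):
        print(category, total)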

 

What we offer:

  • Competitive salary and performance bonus
  • Amazing culture, strong believers in Autonomy/Mastery/Purpose
  • Customer-driven, we make money by building the best products for our users. No confusion about how to win – build amazing products!
  • Ability to work with and learn from some incredible talent
  • Highly recognizable brand
Velocity Services

Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
+7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc. 

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse (see the Airflow/DBT sketch below).

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
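
As one hedged way to wire the responsibilities above together, a minimal Airflow DAG that runs a source extract and then a DBT build; the DAG id, schedule, script, and project paths are assumptions:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # One daily run: land source extracts first, then let DBT build the warehouse models.
    with DAG(
        dag_id="elt_daily",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(
            task_id="extract_sources",
            bash_command="python /opt/pipelines/extract_sources.py",  # hypothetical script
        )
        transform = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/warehouse",  # hypothetical project
        )
        extract >> transform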

 

What To Bring

  • 5+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and AWS or Google Cloud cloud-based infrastructure

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

BDI Plus Lab

Posted by Puja Kumari
Remote only
2 - 6 yrs
₹6L - ₹20L / yr
Apache Hive
Spark
Scala
PySpark
Data engineering
+4 more
We are looking for big data engineers to join our transformational consulting team serving one of our top US clients in the financial sector. You'd get an opportunity to develop big data pipelines and convert business requirements to production-grade services and products. Rather than prescribing how to do a particular task, we believe in giving people the opportunity to think outside the box and come up with their own innovative solutions to problem solving.
You will primarily be developing, managing, and executing multiple prospect campaigns as part of the Prospect Marketing Journey, to ensure the best conversion and retention rates. Below are the roles, responsibilities, and skillsets we are looking for; if you feel these resonate with you, please get in touch with us by applying to this role.
Roles and Responsibilities:
• You'd be responsible for development and maintenance of applications with technologies involving Enterprise Java and Distributed technologies.
• You'd collaborate with developers, product manager, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
• You'd assist in the definition, development, and documentation of the software's objectives, business requirements, deliverables, and specifications, in collaboration with multiple cross-functional teams.
• Assist in the design and implementation process for new products, research and create POC for possible solutions.
Skillset:
• Bachelor's or Master's degree in a technology-related field preferred.
• Overall experience of 2-3 years with Big Data technologies.
• Hands-on experience with Spark (Java/Scala)
• Hands-on experience with Hive and shell scripting
• Knowledge of HBase, Elastic Search
• Development experience in Java/Python is preferred
• Familiarity with profiling, code coverage, logging, common IDEs, and other development tools.
• Demonstrated verbal and written communication skills, and ability to interface with Business, Analytics, and IT organizations.
• Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks.
• Ability to identify non-obvious solutions to complex problems
6sense

Posted by Romesh Rawat
Remote only
5 - 8 yrs
₹30L - ₹45L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more

About Slintel (a 6sense company) :

Slintel, a 6sense company, the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market & sales intelligence.

Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.

Slintel is a fast growing B2B SaaS company in the sales and marketing tech space. We are funded by top tier VCs, and going after a billion dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.

We are a big data company and perform deep analysis on technology buying patterns, buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts on. Third party intent signals are then clubbed with first party data from CRMs to derive meaningful recommendations on whom to target on any given day.

6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.

6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.

Linkedin (Slintel) : https://www.linkedin.com/company/slintel/

Industry : Software Development

Company size : 51-200 employees (189 on LinkedIn)

Headquarters : Mountain View, California

Founded : 2016

Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.

Website (Slintel) : https://www.slintel.com/slintel

Linkedin (6sense) : https://www.linkedin.com/company/6sense/

Industry : Software Development

Company size : 501-1,000 employees (937 on LinkedIn)

Headquarters : San Francisco, California

Founded : 2013

Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales

Website (6sense) : https://6sense.com/

Acquisition News : 

https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/ 

Funding Details & News :

Slintel funding : https://www.crunchbase.com/organization/slintel

6sense funding : https://www.crunchbase.com/organization/6sense

https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round

https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round

https://xipometer.com/en/company/6sense

Slintel & 6sense Customers :

https://www.featuredcustomers.com/vendor/slintel/customers

https://www.featuredcustomers.com/vendor/6sense/customers

About the job

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elastic search, MongoDB, and AWS technology
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems

Requirements

  • 3+ years of experience in a Data Engineer role
  • Proficiency in Linux
  • Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena
  • Must have experience with Python/Scala
  • Must have experience with big data technologies like Apache Spark
  • Must have experience with Apache Airflow
  • Experience with data pipeline and ETL tools like AWS Glue
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift, and other data solutions, e.g. Databricks, Snowflake (a minimal extract-and-load sketch follows this list)
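
For illustration, a minimal extract-and-load sketch in the spirit of the responsibilities above: pull a slice of a relational source with pandas and land it in an S3 data lake; the DSN, table, bucket, and key are hypothetical:

    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    # Extract: read from a hypothetical MySQL source (the DSN is a placeholder).
    engine = create_engine("mysql+pymysql://reader:...@db.example.com/crm")
    df = pd.read_sql("SELECT account_id, stage, updated_at FROM opportunities", engine)

    # Transform: normalise labels before they enter the lake.
    df["stage"] = df["stage"].str.strip().str.lower()

    # Load: write parquet locally, then push it into the S3 raw zone.
    df.to_parquet("opportunities.parquet", index=False)
    boto3.client("s3").upload_file(
        "opportunities.parquet", "example-data-lake", "raw/crm/opportunities.parquet"
    )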

 

Desired Skills and Experience

Python, SQL, Scala, Spark, ETL

 

Crewscale

Posted by vinodh Rajamani
Remote only
2 - 6 yrs
₹4L - ₹40L / yr
Python
SQL
Amazon Web Services (AWS)
ETL
Informatica
+2 more
Crewscale – Toplyne Collaboration:

The present role is a Data Engineer role for the Crewscale–Toplyne collaboration. Crewscale is an exclusive partner of Toplyne.

About Crewscale:
Crewscale is a premium technology company focused on helping companies build world-class scalable products. We are a product-based start-up with a code assessment platform that is used by top technology disrupters across the world.

Crewscale works with premium product companies (Indian and international) like Swiggy, ShareChat, Grab, Capillary, Uber, Workspan, Ovo, and many more. We are responsible for managing infrastructure for Swiggy as well. We focus on building only world-class tech products, and our USP is building technology that can handle scale from 1 million to 1 billion hits.

We invite candidates who have a zeal to develop world-class products to come and work with us.

Toplyne

Who are we? 👋

Toplyne is a global SaaS product built to help revenue teams, at businesses with a self-service motion, and a large user-base, identify which users to spend time on, when and for what outcome. Think self-service or freemium-led companies like Figma, Notion, Freshworks, and Slack. We do this by helping companies recognize signals across their - product engagement, sales, billing, and marketing data.

Founded in June 2021, Toplyne is backed by marquee investors like Sequoia, Together Fund, and a number of well-known angels. You can read more about us at https://bit.ly/ForbesToplyne and https://bit.ly/YourstoryToplyne.

What will you get to work on? 🏗️

  • Design, Develop and maintain scalable data pipelines and Data warehouse to support continuing increases in data volume and complexity.

  • Develop and implement processes and systems to monitor data quality and data mining, ensuring production data is always accurate and available for key partners and the business processes that depend on it (see the quality-check sketch below).

  • Perform data analysis required to solve data related issues and assist in the resolution of data issues.

  • Complete ownership - You’ll build highly scalable platforms and services that support rapidly growing data needs in Toplyne. There’s no instruction book, it’s yours to write. You’ll figure it out, ship it, and iterate.
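
As a hedged sketch of the kind of production data-quality check described above, a small pandas routine; the column names, thresholds, and the assumption that event_ts is timezone-aware UTC are all illustrative:

    import pandas as pd

    def check_quality(df: pd.DataFrame) -> list:
        """Return data-quality violations for a hypothetical users extract."""
        problems = []
        # Completeness: key identifiers must never be null.
        if df["user_id"].isna().any():
            problems.append("null user_id values found")
        # Uniqueness: exactly one row per user.
        if df["user_id"].duplicated().any():
            problems.append("duplicate user_id values found")
        # Freshness: the extract should contain events from the last 24 hours
        # (assumes event_ts is timezone-aware UTC).
        if pd.Timestamp.now(tz="UTC") - df["event_ts"].max() > pd.Timedelta(hours=24):
            problems.append("extract is stale (no events in the last 24h)")
        return problems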

What do we expect from you? 🙌🏻

  • 3-6 years of relevant work experience in a Data Engineering role.

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.

  • Experience building and optimising data pipelines, architectures and data sets.

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

  • Strong analytic skills related to working with unstructured datasets.

  • A good understanding of Airflow, Spark, NoSQL databases, and Kafka is nice to have.

hiring for a leading client


Agency job
via Jobaajcom by Saksham Agarwal
Bengaluru (Bangalore)
1 - 3 yrs
₹12L - ₹15L / yr
Big Data
Apache Hadoop
Apache Impala
Apache Kafka
Apache Spark
+5 more
We are seeking a self-motivated Software Engineer with hands-on experience to build sustainable data solutions, identify and address performance bottlenecks, collaborate with other team members, and implement best practices for data engineering. Our engineering process is fully agile and has a really fast release cycle, which keeps our environment very energetic and fun.

What you'll do:

Design and development of scalable applications.
Collaborate with tech leads to get maximum understanding of underlying infrastructure.
Contribute to continual improvement by suggesting improvements to the software system.
Ensure high scalability and performance
You will advocate for good, clean, well-documented, and performant code; follow standards and best practices.
We'd love for you to have:

Education: Bachelor/Master Degree in Computer Science
Experience: 1-3 years of relevant experience in BI/Big-Data with hands-on coding experience
Mandatory Skills

Strong in problem-solving
Good exposure to Big Data technologies - Hive, Hadoop, Impala, HBase, Kafka, Spark
Strong experience in Data Engineering
Ability to comprehend challenges related to database and data warehousing technologies and to understand complex design and system architecture
Experience with the software development lifecycle - design, develop, review, debug, document, and deliver (especially in a multi-location organization)
Working knowledge of Java, Python
Desired Skills

Experience with reporting tools like Tableau, QlikView
Awareness of CI-CD pipeline
Inclination to work on cloud platform ex:- AWS
Crisp communication skills with team members, Business owners.
Be able to work in a challenging, dynamic environment and meet tight deadlines
Hiring for a leading client


Agency job
via Jobaajcom by Saksham Agarwal
New Delhi
3 - 5 yrs
₹10L - ₹15L / yr
Big Data
Apache Kafka
Business Intelligence (BI)
Data Warehouse (DWH)
Coding
+15 more
Job Description:
Senior Software Engineer - Data Team

We are seeking a highly motivated Senior Software Engineer with hands-on experience to build scalable, extensible data solutions, identify and address performance bottlenecks, collaborate with other team members, and implement best practices for data engineering. Our engineering process is fully agile and has a really fast release cycle, which keeps our environment very energetic and fun.

What you'll do:

Design and development of scalable applications.
Work with Product Management teams to get maximum value out of existing data.
Contribute to continual improvement by suggesting improvements to the software system.
Ensure high scalability and performance
You will advocate for good, clean, well-documented, and performant code; follow standards and best practices.
We'd love for you to have:

Education: Bachelor/Master Degree in Computer Science.
Experience: 3-5 years of relevant experience in BI/DW with hands-on coding experience.

Mandatory Skills

Strong in problem-solving
Strong experience with Big Data technologies - Hive, Hadoop, Impala, HBase, Kafka, Spark
Strong experience with orchestration frameworks like Apache Oozie and Airflow
Strong experience in Data Engineering
Strong experience with database and data warehousing technologies and the ability to understand complex design and system architecture
Experience with the full software development lifecycle - design, develop, review, debug, document, and deliver (especially in a multi-location organization)
Good knowledge of Java
Desired Skills

Experience with Python
Experience with reporting tools like Tableau, QlikView
Experience of Git and CI-CD pipeline
Awareness of cloud platform ex:- AWS
Excellent communication skills with team members, Business owners, across teams
Be able to work in a challenging, dynamic environment and meet tight deadlines
client of Merito


Agency job
via Merito by Merito Talent
Mumbai
3 - 8 yrs
Best in industry
Python
SQL
Tableau
PowerBI
PHP
+2 more

Our client is the world’s largest media investment company and is a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for the coding/implementation of custom marketing applications that Tech COE builds for its customers, and for managing a small team of developers.

 

What your day job looks like:

  • Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
  • Develop data extraction and manipulation code based on business rules
  • Develop automated and manual test cases for the code written
  • Design and construct data store and procedures for their maintenance
  • Perform data extract, transform, and load activities from several data sources.
  • Develop and maintain strong relationships with stakeholders
  • Write high quality code as per prescribed standards.
  • Participate in internal projects as required

 
Minimum qualifications:

  • B. Tech./MCA or equivalent preferred
  • At least 3 years of excellent hands-on experience in big data, ETL development, and data processing.


    What you’ll bring:

  • Strong experience working with Snowflake, SQL, and PHP/Python.
  • Strong experience writing complex SQL
  • Good communication skills
  • Good experience working with any BI tool like Tableau or Power BI.
  • Sqoop, Spark, EMR, and Hadoop/Hive are good to have.

 

 

SteelEye

Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
1 - 8 yrs
₹10L - ₹40L / yr
Python
ETL
Jenkins
CI/CD
pandas
+6 more
Roles & Responsibilities
Expectations of the role
This role reports into the Technical Lead (Support). You will be expected to resolve bugs in the platform that are identified by customers and internal teams. This role will progress towards SDE-2 in 12-15 months, where the developer will work on solving complex problems around scale and building out new features.
 
What will you do?
  • Fix issues with plugins for our Python-based ETL pipelines (a plugin-registry sketch follows this list)
  • Help with automation of standard workflow
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Responsible for any refactoring of code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
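
By way of illustration, a minimal plugin-registry sketch of the sort such Python ETL pipelines often use; the decorator pattern and step names are assumptions, not SteelEye's actual design:

    from typing import Callable, Dict, Iterable, List

    # Registry mapping plugin names to record-level transform callables.
    PLUGINS: Dict[str, Callable[[dict], dict]] = {}

    def plugin(name: str):
        """Register a transform function as a named pipeline plugin."""
        def wrap(fn: Callable[[dict], dict]):
            PLUGINS[name] = fn
            return fn
        return wrap

    @plugin("drop_pii")
    def drop_pii(record: dict) -> dict:
        # Remove fields that must not leave the ingestion boundary.
        return {k: v for k, v in record.items() if k not in ("email", "phone")}

    def run_pipeline(records: Iterable[dict], steps: List[str]) -> List[dict]:
        """Apply the configured plugins to every record, in order."""
        out = []
        for rec in records:
            for step in steps:
                rec = PLUGINS[step](rec)
            out.append(rec)
        return out

    print(run_pipeline([{"id": 1, "email": "a@b.c"}], ["drop_pii"]))
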
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You support the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
CoStrategix Technologies

Posted by Jayasimha Kulkarni
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹28L / yr
Data engineering
Data Structures
Programming
Python
C#
+3 more

 

Job Description - Sr Azure Data Engineer

 

 

Roles & Responsibilities:

  1. Hands-on programming in C# / .NET.
  2. Develop serverless applications using Azure Function Apps.
  3. Writing complex SQL Queries, Stored procedures, and Views. 
  4. Creating Data processing pipeline(s).
  5. Develop / Manage large-scale Data Warehousing and Data processing solutions.
  6. Provide clean, usable data and recommend data efficiency, quality, and data integrity.

 

Skills

  1. Should have working experience with C# / .NET.
  2. Proficient in writing SQL queries, stored procedures, and views
  3. Should have worked on the Azure cloud stack.
  4. Should have working experience in developing serverless code.
  5. Must have MANDATORILY worked on Azure Data Factory.

 

Experience 

  1. 4+ years of relevant experience

 

surusha technology Pvt Ltd
Posted by subham kumar
Remote only
3 - 6 yrs
₹3L - ₹6L / yr
C#
SQL Azure
ETL
OLAP
SQL
+1 more
Role - Data Engineer

Skillsets - Azure, OLAP, ETL, SQL, Python, C#

Experience range - 3 to 4+ years

Salary - best in industry

Notice period - currently serving notice period (immediate joiners are preferred)

Location - remote work

Job type - permanent role

It is full-time and fully remote.


Note: The interview has 3 rounds - a technical round, a manager/client round, and an HR round.
Propellor.ai

Posted by Kajal Jain
Remote only
1 - 4 yrs
₹5L - ₹15L / yr
Python
SQL
Spark
Hadoop
Big Data
+2 more

Big Data Engineer/Data Engineer


What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge

• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on tech team for written queries

These facts may look scary but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of a data source into a universal schema
• Analytics database that streamlines data indexing, query and analysis into a single platform.
• Start generating value from Day 1 through deep dives, root cause analysis and micro segmentation

At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of Projects
• Using Dependable, scalable, future proof technology solution like Big Data Solutions and Cloud Platforms
• Engaging with Data Scientists and Data Engineers to provide End to End Solutions leading to industrialisation of Data Science Model Development and Deployment

What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily

Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only with highest quality team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture
• No hierarchy leads us to believe in reaching the decision maker without any hesitation so that our actions can have fruitful and aligned outcomes.
• Each one is a CEO of their domain. So, the criterion while making a choice is that our employees and clients can succeed together!

To read more about us click here:
https://bit.ly/3idXzs0

About the role
We are building an exceptional team of Data Engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.

Role Description

• The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.

Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE
• 3+ years of experience designing technological solutions to complex data problems, developing & testing modular, reusable, efficient, and scalable code to implement those solutions.

Must-have (hands-on) experience:
• Python and SQL expertise
• Distributed computing frameworks (Hadoop ecosystem & Spark components)
• Proficiency in at least one cloud computing platform (AWS/Azure/GCP); GCP experience (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine) is preferred (see the BigQuery sketch after these lists)
• Linux environment, SQL, and shell scripting

Desirable:
• Statistical or machine learning DSLs like R
• Distributed and low-latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design
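
For the GCP item above, a minimal BigQuery sketch; it assumes configured GCP credentials, and the project, dataset, and table names are placeholders:

    from google.cloud import bigquery

    # Hypothetical project; authentication comes from the environment.
    client = bigquery.Client(project="example-project")

    sql = """
        SELECT user_id, COUNT(*) AS sessions
        FROM `example-project.analytics.events`
        WHERE event_date = CURRENT_DATE()
        GROUP BY user_id
        ORDER BY sessions DESC
        LIMIT 10
    """

    # Run the query and pull the result into a DataFrame for downstream use.
    df = client.query(sql).to_dataframe()
    print(df.head())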

Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams

Immediate joiners preferred

Top Multinational Fintech Startup


Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You support the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Upgrad KnowledgeHut


Agency job
Hyderabad
3 - 5 yrs
₹8L - ₹15L / yr
Amazon Web Services (AWS)
Data Analytics
Data Visualization
PowerBI
Tableau
+3 more

AWS Data Engineer:

 

Job Description

  • 3+ years of experience in AWS Data Engineering.

  • Design and build ETL pipelines & data lakes to automate ingestion of structured and unstructured data

  • Experience working with AWS big data technologies (Redshift, S3, AWS Glue, Kinesis, Athena, DMS, EMR, and Lambda for serverless ETL) - see the Lambda sketch after this list

  • Should have knowledge of SQL and NoSQL programming languages.

  • Have worked on batch and real-time pipelines.

  • Excellent programming and debugging skills in Scala or Python & Spark.

  • Good experience in Data Lake formation, Apache Spark, and Python, with hands-on experience deploying models.

  • Must have experience with production migration processes

  • Nice to have experience with Power BI visualization tools and connectivity
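
To make the serverless-ETL item concrete, an illustrative Lambda handler that decodes Kinesis records and lands them in S3 as a micro-batch; the bucket name and the Kinesis trigger wiring are assumptions:

    import base64
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Hypothetical Lambda entry point invoked by a Kinesis stream trigger."""
        lines = []
        for record in event["Records"]:
            # Kinesis delivers payloads base64-encoded.
            payload = base64.b64decode(record["kinesis"]["data"])
            lines.append(json.loads(payload))
        # Micro-batch the decoded events into the raw zone of the lake.
        key = "raw/events/" + context.aws_request_id + ".json"
        body = "\n".join(json.dumps(line) for line in lines).encode()
        s3.put_object(Bucket="example-data-lake", Key=key, Body=body)
        return {"records_written": len(lines)}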

 

Roles & Responsibilities:

  • Design, build and operationalize large scale enterprise data solutions and applications

  • Analyze, re-architect and re-platform on premise data warehouses to data platforms on AWS cloud.

  • Design and build production data pipelines from ingestion to consumption within AWS big data architecture, using Python, or Scala.

  • Perform detail assessments of current state data platforms and create an appropriate transition path to AWS cloud.

 

Unique Data Solutions Provider


Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹35L / yr
Architecture
Technical Architecture
Solution architecture
Information architecture
Java Architecture for XML Binding (JAXB)
+9 more
• Minimum 5 years of data engineering and/or cloud data management experience
• Ability to understand customer requirements and create customized demonstrations and collateral
• Provide product feedback (feature requests, user experience) to the development team
• Strong foundation in system-level architectures and compute, storage, and networking infrastructure, specifically:
  • Compute architectures - physical and virtualized, operating systems (Linux strongly preferred)
  • Storage systems - file systems, object stores
  • On-prem data center and public cloud (AWS, Azure, Google Cloud) environments
• Hands-on experience with Linux/Unix systems as a system administrator or an equivalent role involving installing software and security patches, installing hardware components on servers as per product manuals, etc.
• Hands-on experience working with public cloud infrastructure and services. Cloud certifications are preferred.
• Basic understanding of enterprise system deployment architecture around network configuration, security-related settings, etc.
• Experience troubleshooting configuration issues to resolve them independently or in collaboration with customer support teams.
• Ability to work with development/L3 support teams to live-debug issues for swift resolution
• Experience with programming or scripting languages such as Python, Java, Go is preferred.
• Experience with data management, DevOps, micro-services, containerization
Amagi Media Labs

Posted by Rajesh C
Bengaluru (Bangalore), Noida
5 - 9 yrs
₹10L - ₹17L / yr
Data engineering
Spark
Scala
Hadoop
Apache Hadoop
+1 more
  • We are looking for: Data Engineer
  • Spark
  • Scala
  • Hadoop
Experience - 5 to 9 years
Notice period - 15 to 30 days
Location : Bangalore / Noida
Mobile Programming India Pvt Ltd

Posted by Pawan Tiwari
Remote, Bengaluru (Bangalore), Chennai, Pune, Gurugram, Mohali, Dehradun
4 - 7 yrs
₹10L - ₹15L / yr
Data engineering
Data Engineer
Django
Python

Looking for a Data Engineer for our own organization.

Notice period - 15-30 days
CTC - up to 15 LPA

 

Preferred Technical Expertise 

  1. Expertise in Python programming.
  2. Proficiency with the Pandas/NumPy libraries (see the pandas sketch after this list).
  3. Experience with the Django framework and API development.
  4. Proficiency in writing complex queries using SQL.
  5. Hands-on experience with Apache Airflow.
  6. Experience with source-code versioning tools such as Git, Bitbucket, etc.
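
As a hedged illustration of the Pandas/NumPy item above, a small cleaning routine over a made-up extract; the defects and thresholds are illustrative:

    import numpy as np
    import pandas as pd

    # A hypothetical raw extract with typical defects: duplicates, gaps, outliers.
    raw = pd.DataFrame({
        "order_id": [1, 2, 2, 3],
        "amount": ["120.5", None, "80.0", "9999999"],
    })

    df = raw.drop_duplicates(subset="order_id").copy()
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

    # Cap implausible values at the 99th percentile instead of dropping rows.
    df["amount"] = np.minimum(df["amount"], df["amount"].quantile(0.99))

    # Fill remaining gaps with the median so downstream aggregates stay defined.
    df["amount"] = df["amount"].fillna(df["amount"].median())
    print(df)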

 Good to have Skills:

  1. Create and maintain optimal data pipeline architecture
  2. Experienced in handling large structured data.
  3. Demonstrated ability in solutions covering data ingestion, data cleansing, ETL, data mart creation, and exposing data to consumers.
  4. Experience with any cloud platform (GCP is a plus)
  5. Experience with jQuery, HTML, JavaScript, CSS is a plus.

If interested, kindly share your CV.
NSEIT

Posted by Vishal Pednekar
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Data engineering
Big Data
Data Engineer
Amazon Web Services (AWS)
NOSQL Databases
+1 more
  • Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories…
  • Experience building a Data Lake using AWS, and hands-on experience with S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, and EMR
  • Experience with Apache Spark programming on Databricks
  • Experience working with NoSQL databases such as Cassandra, HBase, and Elastic Search
  • Hands-on experience leveraging CI/CD to rapidly build & test application code
  • Expertise in data governance and data quality
  • Experience working with PCI data and with data scientists is a plus
  • 4+ years of experience with the following big data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
  • 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies