
50+ ETL Jobs in India

Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

Cloudesign Technology Solutions
Remote only
5 - 13 yrs
₹15L - ₹28L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+8 more

Job Description: Data Engineer

Location: Remote

Experience Required: 6 to 12 years in Data Engineering

Employment Type: Full-time

Notice Period: Looking for candidates who can join immediately or within 15 days at most.

 

About the Role:

We are looking for a highly skilled Data Engineer with extensive experience in Python, Databricks, and Azure services. The ideal candidate will have a strong background in building and optimizing ETL processes, managing large-scale data infrastructures, and implementing data transformation and modeling tasks.

 

Key Responsibilities:

ETL Development:

Use Python as an ETL tool to read data from various sources, perform data type transformations, handle errors, implement logging mechanisms, and load data into Databricks-managed Delta tables.
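For illustration only, a step of this shape might look like the following minimal PySpark sketch (the source path, schema, and table names are hypothetical, not details of this role's actual stack):

```python
# Minimal sketch of a Python ETL step on Databricks; all names are hypothetical.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

spark = SparkSession.builder.getOrCreate()

try:
    # Extract: read raw CSV files from a mounted source location.
    raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

    # Transform: cast data types and drop rows missing the business key.
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .dropna(subset=["order_id"])
    )

    # Load: append into a Databricks-managed Delta table.
    orders.write.format("delta").mode("append").saveAsTable("analytics.orders")
    log.info("Load into analytics.orders succeeded")
except Exception:
    # Log and re-raise so the scheduler marks the job as failed.
    log.exception("ETL step failed")
    raise
```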

Develop robust data pipelines to support analytics and reporting needs.

Data Transformation & Optimization:

Perform data transformations and evaluations within Databricks.

Work on optimizing data workflows for performance and scalability.

Azure Expertise:

Implement and manage Azure services, including Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.

Coding & Development:

Utilize Python for complex tasks involving classes, objects, methods, dictionaries, loops, packages, wheel files, and database connectivity.

Write scalable and maintainable code to manage streaming and batch data processing.

Cloud & Infrastructure Management:

Leverage Spark, Scala, and cloud-based solutions to design and maintain large-scale data infrastructures.

Work with cloud data warehouses, data lakes, and storage formats.

Project Leadership:

Lead data engineering projects and collaborate with cross-functional teams to deliver solutions on time.


Required Skills & Qualifications:

Technical Proficiency:

  • Expertise in Python for ETL and data pipeline development.
  • Strong experience with Databricks and Apache Spark.
  • Proven skills in handling Azure services, including Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.


Experience & Knowledge:

  • 6+ years of experience in data engineering.
  • Solid understanding of data modeling, ETL processes, and optimizing data pipelines.
  • Familiarity with Unix shell scripting and scheduling tools.

Other Skills:

  • Knowledge of cloud warehouses and storage formats.
  • Experience in handling large-scale data infrastructures and streaming data.

 

Preferred Qualifications:

  • Proven experience with Spark and Scala for big data processing.
  • Prior experience in leading or mentoring data engineering teams.
  • Hands-on experience with end-to-end project lifecycle in data engineering.

 

What We Offer:

  • Opportunity to work on challenging and impactful data projects.
  • A collaborative and innovative work environment.
  • Competitive compensation and benefits.

 

How to Apply:

https://cloudesign.keka.com/careers/jobdetails/73555

ScatterPie Analytics
Akshada Desai
Posted by Akshada Desai
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹16L / yr
Engineering Management
ETL
SQL

Skills: ETL + SQL

· Experience with SQL and data querying languages.

· Knowledge of data governance frameworks and best practices.

· Familiarity with programming/scripting languages (e.g., SparkSQL)

· Strong understanding of data integration techniques and ETL processes.

· Experience with data quality tools and methodologies.

· Strong communication and problem-solving skills


Detailed JD: Data Integration: Manage the seamless integration of various data lakes, ensuring that jobs are running as expected; validate the data ingested, track the DQ checks, and rerun/reprocess jobs in case of failures after identifying the root causes (RCAs).

Data Quality Assurance: Monitor and validate data quality during and after the migration process, implementing checks and corrective actions as needed.
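By way of illustration, checks of this kind might be sketched as follows in PySpark (the table name, key column, and check logic are assumptions, not the client's actual framework):

```python
# Hypothetical post-load data-quality checks via Spark SQL (sketch only).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def run_dq_checks(table):
    """Return a list of failed-check descriptions for the given table."""
    failures = []

    # Check 1: the latest load should not be empty.
    rows = spark.sql(f"SELECT COUNT(*) AS c FROM {table}").first()["c"]
    if rows == 0:
        failures.append(f"{table}: zero rows ingested")

    # Check 2: the business key must not contain NULLs.
    null_keys = spark.sql(
        f"SELECT COUNT(*) AS c FROM {table} WHERE customer_id IS NULL"
    ).first()["c"]
    if null_keys > 0:
        failures.append(f"{table}: {null_keys} NULL customer_id values")

    return failures

failed = run_dq_checks("lake.customers")
if failed:
    # A failure here would feed the rerun/RCA workflow described above.
    raise ValueError("DQ checks failed: " + "; ".join(failed))
```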

Documentation: Maintain comprehensive documentation related to data issues encountered during the weekly/monthly processing and operational procedures.

Continuous Improvement: Recommend and implement improvements to data processing, tools, and technologies to enhance efficiency and effectiveness.

Client based in Bangalore.

Agency job
Bengaluru (Bangalore), Pune, Chennai
4 - 8 yrs
₹8L - ₹16L / yr
Ab Initio
ETL
Python
SQL

Ab Initio Developer

 

About the Role:

We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.

Key Responsibilities:

·      Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.

·      Analyze complex data requirements and translate them into effective Ab Initio designs.

·      Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.

·      Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.

·      Optimize performance and scalability of Ab Initio jobs.

·      Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.

·      Stay up-to-date with the latest Ab Initio technologies and industry best practices.

Required Skills and Experience:

·      2.5 to 8 years of hands-on experience in Ab Initio development.

·      Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.

·      Proficiency in Ab Initio's graphical interface and command-line tools.

·      Experience in data modeling, data warehousing, and ETL concepts.

·      Strong SQL skills and experience with relational databases.

·      Excellent problem-solving and analytical skills.

·      Ability to work independently and as part of a team.

·      Strong communication and documentation skills.

Preferred Skills:

·      Experience with cloud-based data integration platforms.

·      Knowledge of data quality and data governance concepts.

·      Experience with scripting languages (e.g., Python, Shell scripting).

·      Certification in Ab Initio or related technologies.


Fatakpay

at Fatakpay

2 recruiters
Disha Gajra
Posted by Disha Gajra
Andheri East, Mumbai
2 - 4 yrs
₹8L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more

Job Title: Data Analyst-Fintech

Job Description:

We are seeking a highly motivated and detail-oriented Data Analyst with 2 to 4 years of work experience to join our team. The ideal candidate will have a strong analytical mindset, excellent problem-solving skills, and a passion for transforming data into actionable insights. In this role, you will play a pivotal role in gathering, analyzing, and interpreting data to support informed decision-making and drive business growth.

Key Responsibilities:

1.      Data Collection and Extraction:

§ Gather data from various sources, including databases, spreadsheets, and APIs.

§ Perform data cleansing and validation to ensure data accuracy and integrity.

2.      Data Analysis:

§ Analyze large datasets to identify trends, patterns, and anomalies (a pandas sketch follows this list).

§ Conduct analysis and data modeling to generate insights and forecasts.

§ Create data visualizations and reports to present findings to stakeholders.

3.      Data Interpretation and Insight Generation:

§ Translate data insights into actionable recommendations for business improvements.

§ Collaborate with cross-functional teams to understand data requirements and provide data-driven solutions.

4.      Data Quality Assurance:

§ Implement data quality checks and validation processes to ensure data accuracy and consistency.

§ Identify and address data quality issues promptly.
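As a purely illustrative example of the analysis work in point 2, here is a minimal pandas sketch (the file, column names, and 3-sigma threshold are invented):

```python
# Hypothetical trend-and-anomaly scan over transaction data (pandas sketch).
import pandas as pd

df = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

# Trend: total transaction amount per month.
monthly = df.set_index("txn_date").resample("M")["amount"].sum()

# Anomalies: months more than 3 standard deviations from the mean.
z_scores = (monthly - monthly.mean()) / monthly.std()
anomalies = monthly[z_scores.abs() > 3]

print(monthly.tail(12))
print("Anomalous months:")
print(anomalies)
```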

Qualifications:

1.      Bachelor's degree in a relevant field such as Computer Science, Statistics, Mathematics, or a related discipline.

2.      Proven work experience as a Data Analyst, with 2 to 4 years of relevant experience.

3.      Knowledge of data warehousing concepts and ETL processes is advantageous.

4.      Proficiency in data analysis tools and languages (e.g., SQL, Python, R).

5.      Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.

6.      Strong analytical and problem-solving skills.

7.      Excellent communication and presentation skills.

8.      Attention to detail and a commitment to data accuracy.

9.      Familiarity with machine learning and predictive modeling is a bonus.


If you are a data-driven professional with a passion for uncovering insights from complex datasets and have the qualifications and skills mentioned above, we encourage you to apply for this Data Analyst position. Join our dynamic team and contribute to making data-driven decisions that will shape our company's future.

Fatakpay is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.


Cornertree

at Cornertree

1 recruiter
Deepesh Shrimal
Posted by Deepesh Shrimal
Pune, Mumbai
3 - 10 yrs
₹5L - ₹45L / yr
Duck Creek
data insight
SQL Server Reporting Services (SSRS)
SQL
ETL
  • Bachelor's degree or higher education level (or foreign equivalent) required, preferably in a relevant technical area.
  • At least 5 years' experience in Duck Creek Data Insights as a Technical Architect/Senior Developer.
  • Strong technical knowledge of SQL databases and MSBI.
  • Strong hands-on knowledge of the Duck Creek Insight product, SQL Server/DB-level configuration, T-SQL, XSL/XSLT, MSBI, etc.
  • Well versed in the Duck Creek Extract Mapper architecture.
  • Strong understanding of data modelling, data warehousing, data marts, and business intelligence, with the ability to solve business problems.
  • Strong understanding of ETL and EDW toolsets on Duck Creek Data Insights.
  • Strong knowledge of the overall Duck Creek Insight product architecture flow, Data Hub, Extract Mapper, etc.
  • Understanding of data related to the business application areas of policy, billing, and claims business solutions.
  • Minimum 4 to 7 years' working experience on the Duck Creek Insights product.
  • Experience in the insurance domain preferred.
  • Prior experience specific to Duck Creek and Duck Creek Data Insights is an added advantage.
  • Strong knowledge of database structure systems and data mining.
  • Excellent organisational and analytical abilities.
  • Outstanding problem-solving skills.


Fatakpay

at Fatakpay

2 recruiters
Disha Gajra
Posted by Disha Gajra
Andheri East, Mumbai
2 - 4 yrs
₹8L - ₹14L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more

Job Description: Data Engineer (Fintech Firm)

Position: Data Engineer

Experience: 2-4 Years

Location: Mumbai-Andheri

Employment Type: Full-Time

About Us:

We are a dynamic fintech firm dedicated to revolutionizing the financial services industry through innovative data solutions. We believe in leveraging cutting-edge technology to provide superior financial products and services to our clients. Join our team and be a part of this exciting journey.

Job Overview:

We are looking for a skilled Data Engineer with 3-5 years of experience to join our data team. The ideal candidate will have a strong background in ETL processes, data pipeline creation, and database management. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data systems and pipelines.

Key Responsibilities:

  • Design and develop robust and scalable ETL processes to ingest and process large datasets from various sources.
  • Build and maintain efficient data pipelines to support real-time and batch data processing.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Optimize database performance and ensure data integrity and security.
  • Troubleshoot and resolve data-related issues and provide support for data operations.
  • Implement data quality checks and monitor data pipeline performance.
  • Document technical solutions and processes for future reference.

Required Skills and Qualifications:

  • Bachelor's degree in Engineering or a related field.
  • 3-5 years of experience in data engineering or a related role.
  • Strong proficiency in ETL tools and techniques.
  • Experience with SQL and relational databases (e.g., MySQL, PostgreSQL).
  • Familiarity with big data technologies.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Knowledge of data warehousing concepts and tools.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.

Preferred Qualifications:

  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Knowledge of machine learning and data science principles.
  • Experience with real-time data processing and streaming platforms (e.g., Kafka).

What We Offer:

  • Competitive compensation package (12-20 LPA) based on experience and qualifications.
  • Opportunity to work with a talented and innovative team in the fintech industry.
  • Professional development and growth opportunities.




codersbrain

at codersbrain

1 recruiter
Tanuj Uppal
Posted by Tanuj Uppal
Remote, Chennai
4 - 8 yrs
₹2L - ₹4L / yr
Bootstrap
HTML/CSS
Data Visualization
ETL

We are seeking a skilled Qlik Developer with 4-5 years of experience in Qlik development to join our team. The ideal candidate will have expertise in QlikView and Qlik Sense, along with strong communication skills for interacting with business stakeholders. Knowledge of other BI tools such as Power BI and Tableau is a plus.


Must-Have Skills:


QlikView and Qlik Sense Development: 4-5 years of hands-on experience in developing and maintaining QlikView/Qlik Sense applications and dashboards.

Data Visualization: Proficiency in creating interactive reports and dashboards, with a deep understanding of data storytelling.

ETL (Extract, Transform, Load): Experience in data extraction from multiple data sources (databases, flat files, APIs) and transforming it into actionable insights.

Qlik Scripting: Knowledge of Qlik scripting, set analysis, and expressions to create efficient solutions.

Data Modeling: Expertise in designing and implementing data models for reporting and analytics.

Stakeholder Communication: Strong communication skills to collaborate with non-technical business users and translate their requirements into effective BI solutions.

Troubleshooting and Support: Ability to identify, troubleshoot, and resolve issues related to Qlik applications.


Nice-to-Have Skills:


Other BI Tools: Experience in using other business intelligence tools such as Power BI and Tableau.

SQL & Data Querying: Familiarity with SQL for data querying and database management.

Cloud Platforms: Experience with cloud services like Azure, AWS, or Google Cloud in relation to BI and data solutions.

Programming Knowledge: Exposure to programming languages like Python or R.

Agile Methodologies: Understanding of Agile frameworks for project delivery.

Smartavya Analytica

Agency job
via Pluginlive by Joslyn Gomes
Mumbai
10 - 15 yrs
₹15L - ₹25L / yr
Datawarehousing
Data Warehouse (DWH)
ETL
Data Visualization
Big Data
+7 more

Experience: 12-15 Years with 7 years in Big Data, Cloud, and Analytics. 

Key Responsibilities: 

  • Technical Project Management:
    o Lead the end-to-end technical delivery of multiple projects in Big Data, Cloud, and Analytics. Lead teams in technical solutioning, design, and development.
    o Develop detailed project plans, timelines, and budgets, ensuring alignment with client expectations and business goals.
    o Monitor project progress, manage risks, and implement corrective actions as needed to ensure timely and quality delivery.
  • Client Engagement and Stakeholder Management:
    o Build and maintain strong client relationships, acting as the primary point of contact for project delivery.
    o Understand client requirements, anticipate challenges, and provide proactive solutions.
    o Coordinate with internal and external stakeholders to ensure seamless project execution.
    o Communicate project status, risks, and issues to senior management and stakeholders in a clear and timely manner.
  • Team Leadership:
    o Lead and mentor a team of data engineers, analysts, and project managers.
    o Ensure effective resource allocation and utilization across projects.
    o Foster a culture of collaboration, continuous improvement, and innovation within the team.
  • Technical and Delivery Excellence:
    o Leverage data management expertise and experience to guide and lead technical conversations effectively. Identify technical areas where the team needs support and work to resolve them, either through own expertise or by networking with internal and external stakeholders to unblock the team.
    o Implement best practices in project management, delivery, and quality assurance.
    o Drive continuous improvement initiatives to enhance delivery efficiency and client satisfaction.
    o Stay updated with the latest trends and advancements in Big Data, Cloud, and Analytics technologies.

Requirements:

  • Experience in IT delivery management, particularly in Big Data, Cloud, and Analytics.
  • Strong knowledge of project management methodologies and tools (e.g., Agile, Scrum, PMP).
  • Excellent leadership, communication, and stakeholder management skills.
  • Proven ability to manage large, complex projects with multiple stakeholders.
  • Strong critical thinking skills and the ability to make decisions under pressure.

Preferred Qualifications:

  • Bachelor’s degree in computer science, Information Technology, or a related field.
  • Relevant certifications in Big Data, Cloud platforms (e.g., GCP, Azure, AWS, Snowflake, Databricks), Project Management, or similar areas are preferred.
Nirmitee.io

at Nirmitee.io

4 recruiters
Disha Karia
Posted by Disha Karia
Pune
5 - 10 yrs
₹8L - ₹18L / yr
ETL
IBM InfoSphere DataStage
SQL

Job Title: Senior ETL Developer (DataStage and SQL)

Location: Pune

Overview:

We’re looking for a Senior ETL Developer with 5+ years of experience in ETL development, strong DataStage and SQL skills, and a track record in complex data integration projects.

Responsibilities:

  • Develop and maintain ETL processes using IBM DataStage and SQL for data warehousing.
  • Write advanced SQL queries to transform and validate data.
  • Troubleshoot ETL jobs, optimize performance, and ensure data quality.
  • Document ETL workflows and adhere to coding standards.
  • Lead and mentor junior developers, providing technical guidance.
  • Collaborate with architects and analysts to deliver scalable solutions.

Qualifications:

  • 5+ years in ETL development; 5+ years with IBM DataStage.
  • Advanced SQL skills and experience with relational databases.
  • Strong understanding of data warehousing and data integration.
  • Experience in performance tuning and ETL process optimization.
  • Team player with leadership abilities and excellent problem-solving skills.


Affine
Rishika Chadha
Posted by Rishika Chadha
Remote only
5 - 8 yrs
Best in industry
Scala
ETL
Apache Kafka
Object Oriented Programming (OOPs)
CI/CD
+4 more

Role Objective:


The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles & Responsibilities:

  • Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
  • Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning (see the sketch after this list).
  • SFDC (data modelling) experience would be given preference.
  • Good understanding of object-oriented concepts and hands-on experience with Scala, with excellent programming logic and technique.
  • Strong grasp of functional programming and OOP concepts in Scala.
  • Good experience in SQL – should be able to write complex queries.
  • Managing the team of Associates and Senior Associates and ensuring the utilization is maintained across the project.
  • Able to mentor new members for onboarding to the project.
  • Understand client requirements and be able to design, develop from scratch, and deliver.
  • AWS cloud experience would be preferable.
  • Design, build and operationalize large scale enterprise data solutions and applications using one or more of AWS data and analytics services - DynamoDB, RedShift, Kinesis, Lambda, S3, etc. (preferred)
  • Hands on experience utilizing AWS Management Tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred)
  • Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred)
  • Leading the client calls to flag off any delays, blockers, escalations and collate all the requirements.
  • Managing project timing, client expectations and meeting deadlines.
  • Should have played project and team management roles.
  • Facilitate meetings within the team on a regular basis.
  • Understand business requirement and analyze different approaches and plan deliverables and milestones for the project.
  • Optimization, maintenance, and support of pipelines.
  • Strong analytical and logical skills.
  • Ability to comfortably tackle new challenges and learn.
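For flavour, here is a small sketch of the DataFrame fundamentals and tuning habits listed above, shown in PySpark purely for illustration (the role itself is Scala-centric, and all paths and column names are invented):

```python
# Illustrative DataFrame usage with two common performance-tuning moves.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

events = spark.read.parquet("/data/events")  # large fact-like dataset
users = spark.read.parquet("/data/users")    # small dimension table

# Broadcast the small table to avoid a shuffle-heavy join.
joined = events.join(F.broadcast(users), "user_id")

# Cache a frequently reused intermediate result.
daily = (
    joined.groupBy(F.to_date("event_ts").alias("day"))
          .agg(F.countDistinct("user_id").alias("dau"))
          .cache()
)

daily.explain()  # inspect the physical plan while tuning
daily.show(7)
```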
Incubyte

at Incubyte

4 recruiters
SHUBHEE JAIN
Posted by SHUBHEE JAIN
Remote only
7 - 10 yrs
₹7L - ₹40L / yr
Data Warehouse (DWH)
Informatica
ETL
Java
Python

Who We Are 🌟

 

We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT’. Embracing Software Craftsmanship values and eXtreme Programming Practices, we create well-crafted products for our clients. We partner with large organizations to help modernize their legacy code bases and work with startups to launch MVPs, scale or as extensions of their team to efficiently operationalize their ideas. We love to work with folks who are passionate about creating exceptional software, are continuous learners, and are painstakingly fussy about quality. 

 

Our Values 💡

 

Relentless Pursuit of Quality with Pragmatism

Extreme Ownership

Proactive Collaboration

Active Pursuit of Mastery

Effective Feedback

Client Success 

 

What We’re Looking For 👀

 

We’re looking to hire software craftspeople and data engineers. People who are proud of the way they work and the code they write. People who believe in and are evangelists of extreme programming principles. High quality, motivated and passionate people who make great teams. We heavily believe in being a DevOps organization, where developers own the entire release cycle, including infrastructure technologies in the cloud.

 

 

What You’ll Be Doing 💻

 

Collaborate with teams across the organization, including product managers, data engineers and business leaders to translate requirements into software solutions to process large amounts of data.

  • Develop new ways to ensure ETL and data processes are running efficiently.
  • Write clean, maintainable, and reusable code that adheres to best practices and coding standards.
  • Conduct thorough code reviews and provide constructive feedback to ensure high-quality codebase.
  • Optimize software performance and ensure scalability and reliability.
  • Stay up-to-date with the latest trends and advancements in data processing and ETL development and apply them to enhance our products.
  • Meet with product owners and other stakeholders weekly to discuss priorities and project requirements.
  • Ensure deployment of new code is tested thoroughly and has business sign off from stakeholders as well as senior leadership.
  • Handle all incoming support requests and errors in a timely manner and within the necessary time frame and timezone commitments to the business.

 

Location : Remote

 

Skills you need in order to succeed in this role

 

What you will bring:

 

  • 7+ years of experience with Java 11+ (required), managing and working in Maven projects
  • 2+ years of experience with Python (required)
  • Knowledge and understanding of complex data pipelines utilizing ETL processes (required)
  • 4+ years of experience using relational databases and deep knowledge of SQL with the ability to understand complex data relationships and transformations (required)
  • Knowledge and understanding of Git (required)
  • 3+ years of experience with various GCP technologies:
    • Google Dataflow (Apache Beam SDK) (or equivalent Hadoop technologies) (see the Beam sketch after this list)
    • BigQuery (or equivalent data warehouse technologies: Snowflake, Azure DW, Redshift)
    • Cloud Storage buckets (equivalent to S3)
    • gcloud CLI
  • Experience with Apache Airflow / Google Composer
  • Knowledge and understanding of Docker, Linux, Shell/Bash and virtualization technologies
  • Knowledge and understanding of CI/CD methodologies
  • Ability to understand and build UML diagrams to showcase complex logic
  • Experience with various organization/code tools such as Jira, Confluence and GitHub
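To give a feel for the Dataflow item above, here is a minimal Apache Beam pipeline in the Python SDK (bucket paths and the three-column record shape are assumptions; a production job would add Dataflow runner options):

```python
# Minimal Apache Beam pipeline (Python SDK) of the kind run on Google Dataflow.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runs locally on the DirectRunner; on Dataflow, pass --runner=DataflowRunner.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeepValid" >> beam.Filter(lambda row: len(row) == 3)
        | "Format" >> beam.Map(",".join)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/part")
    )
```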

 

Bonus Points for Tech Enthusiasts:

  • Infrastructure as Code technologies (Pulumi, Terraform, CloudFormation)
  • Experience with observability and logging platforms (DataDog)
  • Experience with DBT or similar technologies


Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
4 - 7 yrs
₹5L - ₹16L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more

Responsibilities:

  • Proficient in SQL
  • 4+ yrs experience
  • Banking domain expertise 
  • Excellent data analysis and problem-solving skills
  • Attention to data details and accuracy 
  • ETL knowledge and experience 
  • Identifies, creates, and analyzes data, information, and reports to make recommendations and enhance organizational capability.
  • Excellent Communication skills
  • Experience in using Business Analysis tools and techniques
  • Knowledge and understanding of various Business Analysis methodologies
  • Attention to detail and problem-solving skills
  • Notice Period: Immediate joiner


Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Hyderabad
5 - 10 yrs
Best in industry
SQL
Amazon Web Services (AWS)
ETL
Windows Azure
Snow flake schema

Responsibilities include:  

  • Develop and maintain data validation logic in our proprietary Control Framework tool 
  • Actively participate in business requirement elaboration and functional design sessions to develop an understanding of our Operational teams’ analytical needs, key data flows and sources 
  • Assist Operational teams in the buildout of Checklists and event monitoring workflows within our Enterprise Control Framework platform 
  • Build effective working relationships with Operational users, Reporting and IT development teams and business partners across the organization 
  • Conduct interviews, generate user stories, develop scenarios and workflow analyses  
  • Contribute to the definition of reporting solutions that empower Operational teams to make immediate decisions as to the best course of action 
  • Perform some business user acceptance testing 
  • Provide production support and troubleshooting for existing operational dashboards  
  • Conduct regular demos and training of new features for the stakeholder community 



Qualifications  

  • Bachelor’s degree or equivalent in Business, Accounting, Finance, MIS, Information Technology or related field of study 
  • Minimum 5 years of SQL experience required 
  • Experience querying data on cloud platforms (AWS/ Azure/ Snowflake) required 
  • Exceptional problem solving and analytical skills, attention to detail and organization 
  • Able to independently troubleshoot and gather supporting evidence  
  • Prior experience developing within a BI reporting tool (e.g. Spotfire, Tableau, Looker, Information Builders) a plus  
  • Database Management and ETL development experience a plus 
  • Self-motivated, self-assured, and self-managed  
  • Able to multi-task to meet time-driven goals  
  • Asset management experience, including investment operation a plus 



Pune, Hybrid
4 - 8 yrs
₹10L - ₹25L / yr
SAP
Data migration
ETL

Job Description:


We are currently seeking a talented and experienced SAP SF Data Migration Specialist to join our team and drive the successful migration to SAP S/4 from SAP ECC.


As the SAP SF Data Migration Specialist, you will play a crucial role in overseeing the design, development, and implementation of data solutions within our SAP SF environment. You will collaborate closely with cross-functional teams to ensure data integrity, accuracy, and usability to support business processes and decision-making. 



About the Company:


We are a dynamic and innovative company committed to delivering exceptional solutions that empower our clients to succeed. With our headquarters in the UK and a global footprint across the US, Noida, and Pune in India, we bring a decade of expertise to every endeavour, driving real results. We take a holistic approach to project delivery, providing end-to-end services that encompass everything from initial discovery and design to implementation, change management, and ongoing support. Our goal is to help clients leverage the full potential of the Salesforce platform to achieve their business objectives.



What Makes VE3 The Best For You: We think of your family as our family, no matter the shape or size. We offer maternity leave, PF fund contributions, and a 5-day working week, along with a generous paid time off program that helps you balance your work and personal life.


Requirements

Responsibilities:

  • Lead the design and implementation of data migration strategies and solutions within SAP SF environments.
  • Develop and maintain data migration plans, ensuring alignment with project timelines and objectives.
  • Collaborate with business stakeholders to gather and analyse data requirements, ensuring alignment with business needs and objectives.
  • Design and implement data models, schemas, and architectures to support SAP data structures and functionalities.
  • Lead data profiling and analysis activities to identify data quality issues, gaps, and opportunities for improvement.
  • Define data transformation rules and processes to ensure data consistency, integrity, and compliance with business rules and regulations.
  • Manage data cleansing, enrichment, and standardization efforts to improve data quality and usability.
  • Coordinate with technical teams to implement data migration scripts, ETL processes, and data loading mechanisms.
  • Develop and maintain data governance policies, standards, and procedures to ensure data integrity, security, and privacy.
  • Lead data testing and validation activities to ensure accuracy and completeness of migrated data.
  • Provide guidance and support to project teams, including training, mentoring, and knowledge sharing on SAP data best practices and methodologies.
  • Stay current with SAP data management trends, technologies, and best practices, and recommend innovative solutions to enhance data capabilities and performance.

Requirements:

  • Bachelor’s degree in computer science, Information Systems, or related field; master’s degree preferred.
  • 10+ years of experience in SAP and Non-SAP data management, with a focus on data migration, data modelling, and data governance.
  • Have demonstrable experience as an SAP Data Consultant, ideally working across SAP SuccessFactors and non-SAP systems
  • Highly knowledgeable and experienced in managing HR data migration projects in SAP SuccessFactors environments
  • Demonstrate knowledge of how data aspects need to be considered within overall SAP solution design
  • Manage the workstream activities and plan, including stakeholder management, engagement with the business and the production of governance documentation.
  • Proven track record of leading successful SAP data migration projects from conception to completion.
  • Excellent analytical, problem-solving, and communication skills, with the ability to collaborate effectively with cross-functional teams.
  • Experience with SAP Activate methodologies preferred.
  • SAP certifications in data management or related areas are a plus.
  • Ability to work independently and thrive in a fast-paced, dynamic environment.
  • Lead the data migration workstream, with a direct team of circa 5 resources in addition to other third-party and client resources.
  • Work flexibly and remotely. Occasional UK travel will be required.


Benefits

  • Competitive salary and comprehensive benefits package.
  • Opportunity to work in a dynamic and challenging environment on critical migration projects.
  • Professional growth opportunities in a supportive and forward-thinking organization.
  • Engagement with cutting-edge SAP technologies and methodologies in data migration.
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Pune
5 - 10 yrs
Best in industry
ETL
SQL
Snow flake schema
Data Warehouse (DWH)

Job Description for QA Engineer:

  • 6-10 years of experience in ETL testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts and ETL tools like Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience in JIRA and the Xray defect management tool is good to have.
  • Exposure to financial domain knowledge is considered a plus.
  • Testing data readiness (data quality) and addressing code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Demonstrated strong collaborative experience across regions (APAC, EMEA, and NA) to effectively and efficiently identify root causes of code/data issues and come up with permanent solutions.
  • Prior experience with State Street and Charles River Development (CRD) considered a plus.
  • Experience in tools such as PowerPoint, Excel, SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus.


Key Attributes include:

  • Team player with professional and positive approach
  • Creative, innovative and able to think outside of the box
  • Strong attention to detail during root cause analysis and defect issue resolution
  • Self-motivated & self-sufficient
  • Effective communicator both written and verbal
  • Brings a high level of energy with enthusiasm to generate excitement and motivate the team
  • Able to work under pressure with tight deadlines and/or multiple projects
  • Experience in negotiation and conflict resolution


Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Hyderabad
4 - 7 yrs
₹8L - ₹14L / yr
ETL
JIRA
SSIS
SQL Server Integration Services (SSIS)
HP ALM
+2 more


Job Summary:

We are looking for an experienced ETL Tester with 5 to 7 years of experience and expertise in the banking domain. The candidate will be responsible for testing ETL processes, ensuring data quality, and validating data flows in large-scale projects.


Key Responsibilities:


• Design and execute ETL test cases, ensuring data integrity and accuracy.
• Perform data validation using complex SQL queries (see the reconciliation sketch after this list).
• Collaborate with business analysts to define testing requirements.
• Track defects and work with developers to resolve issues.
• Conduct performance testing for ETL processes.
• Banking Domain Knowledge: Strong understanding of banking processes such as payments, loans, credit, accounts, and regulatory reporting.
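As a sketch of the data-validation bullet above, a simple source-to-target reconciliation could be scripted as follows (the connection strings, table, and checks are assumptions, not details of the actual engagement):

```python
# Hypothetical source-vs-target reconciliation for an ETL load (sketch only).
import sqlalchemy as sa

# Both DSNs are placeholders; real credentials would come from a secrets store.
source = sa.create_engine("oracle+oracledb://user:pwd@src-host/BANK")
target = sa.create_engine("postgresql+psycopg2://user:pwd@dwh-host/DWH")

checks = {
    "row_count": "SELECT COUNT(*) FROM payments",
    "amount_sum": "SELECT SUM(amount) FROM payments",
}

with source.connect() as s, target.connect() as t:
    for name, query in checks.items():
        src_val = s.execute(sa.text(query)).scalar()
        tgt_val = t.execute(sa.text(query)).scalar()
        status = "PASS" if src_val == tgt_val else "FAIL"
        print(f"{name}: source={src_val} target={tgt_val} -> {status}")
```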


Required Skills:


• 5-7 years of ETL testing experience.
• Strong SQL skills and experience with ETL tools (Informatica, SSIS, etc.).
• Knowledge of banking domain processes.
• Experience with test management tools (JIRA, HP ALM).
• Familiarity with Agile methodologies.


Location – Hyderabad

Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
5 - 7 yrs
₹5L - ₹20L / yr
ETL

Job Description:

Responsibilities:

  • Design, develop, and implement ETL processes using IBM DataStage.
  • Collaborate with business analysts and data architects to understand data requirements and translate them into ETL solutions.
  • Develop and maintain data integration workflows to ensure data accuracy and integrity.
  • Optimize ETL processes for performance and scalability.
  • Troubleshoot and resolve data issues and ETL job failures.
  • Document ETL processes, data flows, and mappings.
  • Ensure compliance with data governance and security policies.
  • Participate in code reviews and provide constructive feedback to team members.
  • Stay updated with the latest trends and best practices in ETL and data integration technologies.

Requirements:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Proven experience as an ETL Developer with expertise in IBM DataStage.
  • Strong knowledge of SQL and database management systems (e.g., Oracle, SQL Server, DB2).
  • Experience with data modeling and data warehousing concepts.
  • Familiarity with data quality and data governance principles.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and teamwork abilities.

Location : Pune

Remote only
4 - 5 yrs
₹9.6L - ₹12L / yr
SQL
RESTful APIs
Python
pandas
ETL

We are seeking a Data Engineer (Snowflake, BigQuery, Redshift) to join our team. In this role, you will be responsible for the development and maintenance of fault-tolerant pipelines spanning multiple database systems.


Responsibilities:

  • Collaborate with engineering teams to create REST API-based pipelines for large-scale MarTech systems, optimizing for performance and reliability.
  • Develop comprehensive data quality testing procedures to ensure the integrity and accuracy of data across all pipelines.
  • Build scalable dbt models and configuration files, leveraging best practices for efficient data transformation and analysis.
  • Partner with lead data engineers in designing scalable data models.
  • Conduct thorough debugging and root cause analysis for complex data pipeline issues, implementing effective solutions and optimizations.
  • Follow and adhere to the group's standards such as SLAs, code styles, and deployment processes.
  • Anticipate breaking changes and implement backwards-compatibility strategies for API schema changes.
  • Assist the team in monitoring pipeline health via observability tools and metrics.
  • Participate in refactoring efforts as platform application needs evolve over time.


Requirements:

  • Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field.
  • 3+ years of professional experience with a cloud database such as Snowflake, BigQuery, or Redshift.
  • 1+ years of professional experience with dbt (Cloud or Core).
  • Exposure to various data processing technologies such as OLAP and OLTP and their applications in real-world scenarios.
  • Experience working cross-functionally with other teams such as Product, Customer Success, and Platform Engineering.
  • Familiarity with orchestration tools such as Dagster/Airflow.
  • Familiarity with ETL/ELT tools such as dltHub/Meltano/Airbyte/Fivetran and DBT.
  • High intermediate to advanced SQL skills (comfort with CTEs, window functions).
  • Proficiency with Python and related libraries (e.g., pandas, sqlalchemy, psycopg2) for data manipulation, analysis, and automation.
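As a small illustration of the last two requirements together, here is a window-function query pulled into pandas (the DSN, table, and column names are invented; Snowflake access would also need the snowflake-sqlalchemy dialect installed):

```python
# Hypothetical example: run a CTE + window-function query and load it into pandas.
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("snowflake://user:pwd@account/db/schema")  # placeholder DSN

QUERY = """
WITH ranked AS (
    SELECT
        customer_id,
        order_ts,
        amount,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
    FROM orders
)
SELECT customer_id, order_ts, amount
FROM ranked
WHERE rn = 1  -- latest order per customer
"""

latest_orders = pd.read_sql(QUERY, engine)
print(latest_orders.head())
```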


Benefits:

  • Work Location: Remote
  • 5-day work week


You can apply directly through this link: https://zrec.in/e9578?source=CareerSite


Explore our Career Page for more such jobs: careers.infraveo.com



Onepoint IT Consulting Pvt Ltd
Lalitha Goparaju
Posted by Lalitha Goparaju
Pune
0 - 1 yrs
₹2L - ₹4L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL

We will invite candidates for the selection process who meet the following criteria:


- Graduates/postgraduates from the computer stream only (16 years of education – 10+2+4 or 10+3+3), having passed out in 2023/24

- 65% minimum marks in all semesters, cleared in one go

- Excellent communication skills - written and verbal in English


Further details on salary, benefits, location and working hours:


- Compensation: Rs. 20,000/- (inclusive of PF) stipend for the first 3 months while on training; on successful completion, Rs. 4 LPA.

- Location: Pune

- Working hours: UK timing (8am – 5pm).

- Health insurance coverage of Rs 5L will be provided for the duration of your employment.


Selection process


The selection process will consist of an aptitude assessment (1.5 hr), followed by a technical test (Java and SQL MCQs, 20 min) and a Java programming assignment. After clearing all the above rounds, the final round will be a personal interview.


We request that you only reply to this email message if you meet the selection criteria and agree with the terms and conditions. Please also mention which position you are applying for.


We will also ask you to answer the following screening questions if you wish to apply for any of these open vacancies.


Why should Onepoint consider you for an interview?

Which values are important for you at the workplace and why?

Where would you like to be in 2 to 3 years’ time in terms of your career?


MathCo
Nabhan Mustafa
Posted by Nabhan Mustafa
Bengaluru (Bangalore)
2 - 8 yrs
Best in industry
Data Warehouse (DWH)
Microsoft Windows Azure
Data engineering
Python
Amazon Web Services (AWS)
+2 more
  • Responsible for designing, storing, processing, and maintaining large-scale data and related infrastructure.
  • Can drive multiple projects from both operational and technical standpoints.
  • Ideate and build PoVs or PoCs for new products that can help drive more business.
  • Responsible for defining, designing, and implementing data engineering best practices, strategies, and solutions.
  • Is an Architect who can guide the customers, team, and overall organization on tools, technologies, and best practices around data engineering.
  • Lead architecture discussions, align with business needs, security, and best practices.
  • Has strong conceptual understanding of Data Warehousing and ETL, Data Governance and Security, Cloud Computing, and Batch & Real Time data processing
  • Has strong execution knowledge of Data Modeling, Databases in general (SQL and NoSQL), software development lifecycle and practices, unit testing, functional programming, etc.
  • Understanding of Medallion architecture pattern
  • Has worked on at least one cloud platform.
  • Has worked as a data architect and executed multiple end-to-end data engineering projects.
  • Has extensive knowledge of different data architecture designs and data modelling concepts.
  • Manages conversations with client stakeholders to understand requirements and translate them into technical outcomes.


Required Tech Stack

 

  • Strong proficiency in SQL
  • Experience working on any of the three major cloud platforms i.e., AWS/Azure/GCP
  • Working knowledge of ETL and/or orchestration tools like IICS, Talend, Matillion, Airflow, Azure Data Factory, AWS Glue, GCP Composer, etc.
  • Working knowledge of one or more OLTP databases (Postgres, MySQL, SQL Server, etc.)
  • Working knowledge of one or more Data Warehouse like Snowflake, Redshift, Azure Synapse, Hive, Big Query, etc.
  • Proficient in at least one programming language used in data engineering, such as Python (or Scala/Rust/Java)
  • Has strong execution knowledge of Data Modeling (star schema, snowflake schema, fact vs dimension tables); a sketch follows this list.
  • Proficient in Spark and related applications like Databricks, GCP DataProc, AWS Glue, EMR, etc.
  • Has worked on Kafka and real-time streaming.
  • Has strong execution knowledge of data architecture design patterns (lambda vs kappa architecture, data harmonization, customer data platforms, etc.)
  • Has worked on code and SQL query optimization.
  • Strong knowledge of version control systems like Git to manage source code repositories and designing CI/CD pipelines for continuous delivery.
  • Has worked on data and networking security (RBAC, secret management, key vaults, vnets, subnets, certificates)
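As a sketch of the star-schema item flagged above (all table and column names are hypothetical):

```python
# Star-schema sketch: fact table joined to dimensions for a simple rollup.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

fact_sales = spark.table("dwh.fact_sales")    # grain: one row per order line
dim_product = spark.table("dwh.dim_product")  # surrogate key: product_sk
dim_date = spark.table("dwh.dim_date")        # surrogate key: date_sk

revenue_by_category = (
    fact_sales
    .join(dim_product, "product_sk")
    .join(dim_date, "date_sk")
    .groupBy("category", "fiscal_quarter")
    .agg(F.sum("net_amount").alias("revenue"))
)

revenue_by_category.show()
```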
TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Remote, Pune
2 - 4 yrs
₹8L - ₹20L / yr
Python
PySpark
ETL
databricks
Azure
+6 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.

 

 

We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English. 

 

 

We are seeking a skilled and motivated Data Engineer from the manufacturing industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.

 

 

Skills Required 

  • Experience in the manufacturing industry (metal industry is a plus)  
  • 2+ years of experience as a Data Engineer 
  • Experience in data cleaning & structuring and data manipulation 
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines. 
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation. 
  • Experience in SQL and data structures  
  • Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and NoSQL databases. 
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform. 
  • Proficient in data management and data governance  
  • Strong analytical and problem-solving skills. 
  • Excellent communication and teamwork abilities. 

 


Nice To Have 

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database). 
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud. 


TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Remote, Pune
2 - 6 yrs
₹8L - ₹25L / yr
SQL Azure
databricks
Python
SQL
ETL
+9 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.


We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.


We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department's data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.


Skills Required:


  • Experience in the manufacturing industry (metal industry is a plus)
  • 4+ years of experience as a Data Engineer
  • Experience in data cleaning & structuring and data manipulation
  • Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
  • ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
  • Experience in SQL and data structures
  • Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
  • Proficient in data management and data governance
  • Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork abilities.


Nice To Have:

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud.
  • Bachelor’s degree in computer science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).

Benefits and Perks:

  • A culture that fosters innovation, creativity, continuous learning, and resilience
  • Progressive leave policy promoting work-life balance
  • Mentorship opportunities with highly qualified internal resources and industry-driven programs
  • Multicultural peer groups and supportive workplace policies
  • Annual workcation program allowing you to work from various scenic locations
  • Experience the unique environment of a dynamic start-up


Why should you join TVARIT?


Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.


If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

TVARIT GmbH

at TVARIT GmbH

2 candid answers
Shivani Kawade
Posted by Shivani Kawade
Pune
8 - 15 yrs
₹20L - ₹25L / yr
Python
CI/CD
Systems Development Life Cycle (SDLC)
ETL
JIRA
+5 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking maintenance, increasing OEE, and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.



Requirements:

  • Python Experience: Minimum 3+ years.
  • Software Development Experience: Minimum 8+ years.
  • Data Engineering and ETL Workloads: Minimum 2+ years.
  • Familiarity with Software Development Life Cycle (SDLC).
  • CI/CD Pipeline Development: Experience in developing CI/CD pipelines for large projects.
  • Agile Framework & Sprint Methodology: Experience with Jira.
  • Source Version Control: Experience with GitHub or similar SVC.
  • Team Leadership: Experience leading a team of software developers/data scientists.

Good to Have:

  • Experience with Golang.
  • DevOps/Cloud Experience (preferably AWS).
  • Experience with React and TypeScript.

Responsibilities:

  • Mentor and train a team of data scientists and software developers.
  • Lead and guide the team in best practices for software development and data engineering.
  • Develop and implement CI/CD pipelines.
  • Ensure adherence to Agile methodologies and participate in sprint planning and execution.
  • Collaborate with the team to ensure the successful delivery of projects.
  • Provide on-site support and training in Pune.

Skills and Attributes:

  • Strong leadership and mentorship abilities.
  • Excellent problem-solving skills.
  • Effective communication and teamwork.
  • Ability to work in a fast-paced environment.
  • Passionate about technology and continuous learning.


Note: This is a part-time position paid on an hourly basis. The initial commitment is 4-8 hours per week, with potential fluctuations.


Join TVARIT and be a pivotal part of shaping the future of software development and data engineering.

Wissen Technology

at Wissen Technology

4 recruiters
Sukanya Mohan
Posted by Sukanya Mohan
Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹18L / yr
Informatica PowerCenter
ETL
Teradata DBA
Shell Scripting

Position Description


We are looking for a highly motivated, hands-on Sr. Database/Data Warehouse Data Analytics developer to work at our Bangalore, India location. The ideal candidate will have a solid software technology background, with the ability to build and support robust, secure, multi-platform financial applications, contributing to the Fully Paid Lending (FPL) project. The successful candidate will be a proficient and productive developer, a team leader, have good communication skills, and demonstrate ownership.

 

Responsibilities


  • Produce service metrics, analyze trends, and identify opportunities to improve the level of service and reduce cost as appropriate.
  • Responsible for design, development and maintenance of database schema and objects throughout the lifecycle of the applications.
  • Supporting implemented solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues.
  • Helping the team by taking care of production releases.
  • Troubleshoot data issues and work with data providers for resolution.
  • Closely work with business and applications teams in implementing the right design and solution for the business applications.
  • Build reporting solutions for WM Risk Applications.
  • Work as part of a banking Agile Squad / Fleet.
  • Perform proof of concepts in new areas of development.
  • Support continuous improvement of automated systems.
  • Participate in all aspects of SDLC (analysis, design, coding, testing and implementation).

 

Required Skill

  • 5 to 7 years of strong database (SQL) knowledge, ETL (Informatica PowerCenter), and Unix shell scripting experience.
  • Database (preferably Teradata) knowledge: database design, performance tuning, writing complex DB programs, etc.
  • Demonstrate proficient skills in analysis and resolution of application performance problems.
  • Database fundamentals; relational and data warehouse concepts.
  • Should be able to lead a team of 2-3 members and guide them technically and functionally in their day-to-day work.
  • Ensure designs, code and processes are optimized for performance, scalability, security, reliability, and maintainability.
  • Understanding of requirements of large enterprise applications (security, entitlements, etc.)
  • Provide technical leadership throughout the design process and guidance with regard to practices, procedures, and techniques. Serve as a guide and mentor for junior-level Software Development Engineers.
  • Exposure to JIRA or other ALM tools to create a productive, high-quality development environment.
  • Proven experience in working within an Agile framework.
  • Strong problem-solving skills and the ability to produce high quality work independently and work well in a team.
  • Excellent communication skills (written, interpersonal, presentation), with the ability to easily and effectively interact and negotiate with business stakeholders.
  • Ability and strong desire to learn new languages, frameworks, tools, and platforms quickly.
  • Growth mindset, personal excellence, collaborative spirit

Good to have skills.

  • Prior work experience with Azure or other cloud platforms such as Google Cloud, AWS, etc.
  • Exposure to programming languages python/R/ java and experience with implementing Data analytics projects.
  • Experience in Git and development workflows.
  • Prior experience in Banking and Financial domain.
  • Exposure to security-based lending is a plus.
  • Experience with Reporting/BI Tools is a plus.


Read more
Antier Solutions Pvt. Ltd (Antech)
Harsh Harsh  INT056
Posted by Harsh Harsh INT056
Remote only
3 - 4 yrs
₹7L - ₹10L / yr
ETL
GraphQL
OLAP
RESTful APIs
hasura
+2 more

Key Responsibilities:


- Design, develop, and maintain ETL processes and data pipelines.

- Work with OLAP databases and ensure efficient data storage and retrieval.

- Utilize Apache Pinot for real-time data analytics.

- Implement and manage data integration using Airbyte.

- Orchestrate data workflows with Apache Airflow (a minimal DAG sketch follows this list).

- Develop and maintain RESTful and GraphQL APIs for data services.

- Deploy and manage applications on Hasura Cloud.

- Collaborate with cross-functional teams to understand data requirements and provide scalable solutions.

- Ensure data quality, integrity, and security across all pipelines.
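
As a rough illustration of the Airflow orchestration mentioned in this list, here is a minimal DAG sketch in Python, assuming a recent Airflow 2.x release; the dag_id, task names, and callables are hypothetical placeholders, not part of this role's actual pipelines:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder ETL steps; real tasks would trigger Airbyte syncs,
    # Pinot ingestion jobs, etc.
    def extract():
        print("pull raw data from sources")

    def transform():
        print("clean and reshape the data")

    def load():
        print("load into the OLAP store")

    with DAG(
        dag_id="example_etl_daily",          # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Linear dependency chain: extract, then transform, then load.
        extract_task >> transform_task >> load_task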


Required Skills and Experience:


- Proven experience in ETL development and data pipeline management.

- Strong understanding of OLAP systems.

- Hands-on experience with Apache Pinot.

- Proficiency in using Airbyte for data integration.

- Expertise in Apache Airflow for workflow orchestration.

- Experience in developing RESTful and GraphQL APIs.

- Familiarity with Hasura Cloud or similar cloud platforms.

- Excellent problem-solving skills and attention to detail.

- Strong communication skills and ability to work in a collaborative team environment.


Qualifications:


  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 3+ years of experience in backend engineering with a focus on data pipelines and ETL processes.
  • Demonstrated ability to work in a fast-paced, dynamic environment.


Note: Contractual job for 6 months

Read more
codersbrain

Tanuj Uppal
Posted by Tanuj Uppal
Bengaluru (Bangalore)
10 - 15 yrs
₹10L - ₹15L / yr
Microsoft Windows Azure
Snowflake
Delivery Management
ETL
PySpark
+2 more
  • Sr. Solution Architect 
  • Job Location – Bangalore
  • Need candidates who can join in 15 days or less.
  • Overall, 12-15 years of experience.

 

Looking for this tech stack in a Sr. Solution Architect (who also has a Delivery Manager background): someone with strong business and IT stakeholder collaboration and negotiation skills, who can provide thought leadership, collaborate in the development of product roadmaps, influence decisions, and negotiate effectively with business and IT stakeholders.

 

  • Building data pipelines using Azure data tools and services (Azure Data Factory, Azure Databricks, Azure Functions, Spark, Azure Blob/ADLS, Azure SQL, Snowflake, etc.)
  • Administration of cloud infrastructure in public clouds such as Azure
  • Monitoring cloud infrastructure, applications, big data pipelines and ETL workflows
  • Managing outages, customer escalations, crisis management, and other similar circumstances.
  • Understanding of DevOps tools and environments like Azure DevOps, Jenkins, Git, Ansible, Terraform.
  • SQL, Spark SQL, Python, PySpark
  • Familiarity with agile software delivery methodologies
  • Proven experience collaborating with global Product Team members, including Business Stakeholders located in NA


Read more
Nyteco

Alokha Raj
Posted by Alokha Raj
Remote only
4 - 6 yrs
₹17L - ₹20L / yr
Data Transformation Tool (DBT)
ETL
SQL
Big Data
Google Cloud Platform (GCP)
+2 more

Join Our Journey

Jules develops an amazing end-to-end solution for recycled materials traders, importers and exporters. That means a looooot of internal, structured data to play with in order to provide reporting, alerting and insights to end-users. With about 200 tables covering all business processes, from order management to payments, including logistics, hedging and claims, the wealth the data entered in Jules can unlock is massive.


After working on a simple stack made of Postgres, SQL queries and a visualization solution, the company is now ready to set up its data stack and only misses you. We are thinking DBT, Redshift or Snowflake, Fivetran, Metabase or Luzmo, etc. We also have an AI team already playing around with text-driven data interaction.


As a Data Engineer at Jules AI, your duties will involve both data engineering and product analytics, enhancing our data ecosystem. You will collaborate with cross-functional teams to design, develop, and sustain data pipelines, and conduct detailed analyses to generate actionable insights.


Roles And Responsibilities:

  • Work with stakeholders to determine data needs, and design and build scalable data pipelines.
  • Develop and sustain ELT processes to guarantee timely and precise data availability for analytical purposes.
  • Construct and oversee large-scale data pipelines that collect data from various sources.
  • Expand and refine our DBT setup for data transformation.
  • Engage with our data platform team to address customer issues.
  • Apply your advanced SQL and big data expertise to develop innovative data solutions.
  • Enhance and debug existing data pipelines for improved performance and reliability.
  • Generate and update dashboards and reports to share analytical results with stakeholders.
  • Implement data quality controls and validation procedures to maintain data accuracy and integrity.
  • Work with various teams to incorporate analytics into product development efforts.
  • Use technologies like Snowflake, DBT, and Fivetran effectively.


Mandatory Qualifications:

  • Hold a Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Possess at least 4 years of experience in Data Engineering, ETL Building, database management, and Data Warehousing.
  • Demonstrated expertise as an Analytics Engineer or in a similar role.
  • Proficient in SQL, a scripting language (Python), and a data visualization tool.
  • Mandatory experience in working with DBT.
  • Experience in working with Airflow, and cloud platforms like AWS, GCP, or Snowflake.
  • Deep knowledge of ETL/ELT patterns.
  • Require at least 1 year of experience in building Data pipelines and leading data warehouse projects.
  • Experienced in mentoring data professionals across all levels, from junior to senior.
  • Proven track record in establishing new data engineering processes and navigating through ambiguity.
  • Preferred Skills: Knowledge of Snowflake and reverse ETL tools is advantageous.


Grow, Develop, and Thrive With Us

  • Global Collaboration: Work with a dynamic team that’s making an impact across the globe, in the recycling industry and beyond. We have customers in India, Singapore, the United States, Mexico, Germany, France and more.
  • Professional Growth: a highway toward setting up a great data team and evolving into a leader.
  • Flexible Work Environment: Competitive compensation, performance-based rewards, health benefits, paid time off, and flexible working hours to support your well-being.


Apply to us directly : https://nyteco.keka.com/careers/jobdetails/41442

Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Pune, Hyderabad, Ahmedabad, Chennai
3 - 7 yrs
₹8L - ₹15L / yr
AWS Lambda
Amazon S3
Amazon VPC
Amazon EC2
Amazon Redshift
+3 more

Technical Skills:


  • Ability to understand and translate business requirements into design.
  • Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
  • Experience in creating ETL jobs using Python/PySpark.
  • Proficiency in creating AWS Lambda functions for event-based jobs.
  • Knowledge of automating ETL processes using AWS Step Functions.
  • Competence in building data warehouses and loading data into them.


Responsibilities:


  • Understand business requirements and translate them into design.
  • Assess AWS infrastructure needs for development work.
  • Develop ETL jobs using Python/PySpark to meet requirements.
  • Implement AWS Lambda for event-based tasks (see the sketch after this list).
  • Automate ETL processes using AWS Step Functions.
  • Build data warehouses and manage data loading.
  • Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
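
For illustration only, a minimal sketch of an event-based AWS Lambda handler in Python, triggered by an S3 ObjectCreated notification; the bucket layout and the curated/ prefix are assumptions, not project specifics:

    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # S3 event notifications deliver one or more records per invocation.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Read the newly arrived object.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            rows = json.loads(body)

            # ... apply transformations here ...

            # Write the result under a (hypothetical) curated/ prefix.
            s3.put_object(
                Bucket=bucket,
                Key=f"curated/{key}",
                Body=json.dumps(rows).encode("utf-8"),
            )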
Read more
Optisol Business Solutions Pvt Ltd
Veeralakshmi K
Posted by Veeralakshmi K
Remote, Chennai, Coimbatore, Madurai
4 - 10 yrs
₹10L - ₹15L / yr
skill iconPython
SQL
Amazon Redshift
Amazon RDS
AWS Simple Notification Service (SNS)
+5 more

Role Summary


As a Data Engineer, you will be an integral part of our Data Engineering team supporting an event-driven, serverless data engineering pipeline on the AWS cloud, responsible for assisting in the end-to-end analysis, development & maintenance of data pipelines and systems (DataOps). You will work closely with fellow data engineers & production support to ensure the availability and reliability of data for analytics and business intelligence purposes.


Requirements:


·      Around 4 years of working experience in data warehousing / BI system.

·      Strong hands-on experience with Snowflake AND strong programming skills in Python (a minimal connection sketch follows the list below)

·      Strong hands-on SQL skills

·      Knowledge of any of the cloud databases such as Snowflake, Redshift, Google BigQuery, RDS, etc.

·      Knowledge of dbt for cloud databases

·      AWS Services such as SNS, SQS, ECS, Docker, Kinesis & Lambda functions

·      Solid understanding of ETL processes, and data warehousing concepts

·      Familiarity with version control systems (e.g., Git/Bitbucket) and collaborative development practices in an agile framework

·      Experience with scrum methodologies

·      Infrastructure build tools such as CFT / Terraform is a plus.

·      Knowledge on Denodo, data cataloguing tools & data quality mechanisms is a plus.

·      Strong team player with good communication skills.
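
As a minimal sketch of the Snowflake-plus-Python combination listed above (every account, credential, and table name below is a placeholder, not a project detail):

    import snowflake.connector  # pip install snowflake-connector-python

    # All connection values here are placeholders.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # Aggregate the last seven days of a hypothetical orders table.
        cur.execute(
            "SELECT order_date, SUM(amount) AS total "
            "FROM orders "
            "WHERE order_date >= DATEADD(day, -7, CURRENT_DATE) "
            "GROUP BY order_date ORDER BY order_date"
        )
        for order_date, total in cur.fetchall():
            print(order_date, total)
    finally:
        conn.close()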


Overview Optisol Business Solutions


OptiSol was named on this year's Best Companies to Work For list by Great Place to Work. We are a team of about 500+ Agile employees with a development center in India and global offices in the US, UK (United Kingdom), Australia, Ireland, Sweden, and Dubai. 16+ years of joyful journey, and we have built about 500+ digital solutions. We have 200+ happy and satisfied clients across 24 countries.


Benefits, working with Optisol


·      Great Learning & Development program

·      Flextime, Work-at-Home & Hybrid Options

·      A knowledgeable, high-achieving, experienced & fun team.

·      Spot Awards & Recognition.

·      The chance to be a part of next success story.

·      A competitive base salary.


More than just a job, we offer an opportunity to grow. Are you the one who looks out to build your future & build your dream? We have the job for you, to make your dream come true.

Read more
bitsCrunch technology pvt ltd
Remote only
3 - 7 yrs
₹5L - ₹10L / yr
SQL
skill iconPython
skill iconJavascript
NOSQL Databases
Web3js
+2 more

Job Description: 

We are looking for an experienced SQL Developer to become a valued member of our dynamic team. In the role of SQL Developer, you will be tasked with creating top-notch database solutions, fine-tuning SQL databases, and providing support for our applications and systems. Your proficiency in SQL database design, development, and optimization will be instrumental in delivering efficient and dependable solutions to fulfil our business requirements.


Responsibilities:

 ● Create high-quality database solutions that align with the organization's requirements and standards.

● Design, manage, and fine-tune SQL databases, queries, and procedures to achieve optimal performance and scalability.

● Collaborate on the development of DBT pipelines to facilitate data transformation and modelling within our data warehouse.

● Evaluate and interpret ongoing business report requirements, gaining a clear understanding of the data necessary for insightful reporting.

● Conduct research to gather the essential data for constructing relevant and valuable reporting materials for stakeholders.

● Analyse existing SQL queries to identify areas for performance enhancements, implementing optimizations for greater efficiency.

● Propose new queries to extract meaningful insights from the data and enhance reporting capabilities.

● Develop procedures and scripts to ensure smooth data migration between systems, safeguarding data integrity.

● Deliver timely management reports on a scheduled basis to support decision-making processes.

● Investigate exceptions related to asset movements to maintain accurate and dependable data records.


Requirements and Qualifications: 

● A minimum of 3 years of hands-on experience in SQL development and administration, showcasing a strong proficiency in database management.

● A solid grasp of SQL database design, development, and optimization techniques.

● A Bachelor's degree in Computer Science, Information Technology, or a related field.

● An excellent understanding of DBT (Data Build Tool) and its practical application in data transformation and modelling.

● Proficiency in either Python or JavaScript, as these are commonly utilized for data-related tasks.

● Familiarity with NoSQL databases and their practical application in specific scenarios.

● Demonstrated commitment and pride in your work, with a focus on contributing to the company's overall success.

● Strong problem-solving skills and the ability to collaborate effectively within a team environment.

● Excellent interpersonal and communication skills that facilitate productive collaboration with colleagues and stakeholders.

● Familiarity with Agile development methodologies and tools that promote efficient project management and teamwork.

Read more
Technogen India PvtLtd

Mounika G
Posted by Mounika G
Hyderabad
11 - 16 yrs
₹24L - ₹27L / yr
Data Warehouse (DWH)
Informatica
ETL
skill iconAmazon Web Services (AWS)
SQL
+1 more

Daily and monthly responsibilities

  • Review and coordinate with business application teams on data delivery requirements.
  • Develop estimation and proposed delivery schedules in coordination with development team.
  • Develop sourcing and data delivery designs.
  • Review data model, metadata and delivery criteria for solution.
  • Review and coordinate with team on test criteria and performance of testing.
  • Contribute to the design, development and completion of project deliverables.
  • Complete in-depth data analysis and contribute to strategic efforts.
  • Complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.

 

Basic Qualifications

  • Bachelor’s degree.
  • 5+ years of data analysis working with business data initiatives.
  • Knowledge of Structured Query Language (SQL) and use in data access and analysis.
  • Proficient in data management including data analytical capability.
  • Excellent verbal and written communications also high attention to detail.
  • Experience with Python.
  • Presentation skills in demonstrating system design and data analysis solutions.


Read more
Wissen Technology

Lokesh Manikappa
Posted by Lokesh Manikappa
Mumbai
4 - 9 yrs
₹15L - ₹32L / yr
skill iconJava
ETL
SQL
Data engineering
skill iconScala

Java/Scala + Data Engineer

 

Experience: 5-10 years

Location: Mumbai

Notice: Immediate to 30 days

Required Skills:

·       5+ years of software development experience.

·       Excellent skills in Java and/or Scala programming, with expertise in backend architectures, messaging technologies, and related frameworks.

·       Developing data pipelines (batch/streaming), complex data transformations, ETL orchestration, and data migration; developing and maintaining data warehouses / data lakes.

·       Extensive experience in complex SQL queries, database development, and data engineering, including the development of procedures, packages, functions, and handling exceptions.

·       Knowledgeable in issue tracking tools (e.g., JIRA), code collaboration tools (e.g., Git/GitLab), and team collaboration tools (e.g., Confluence/Wiki).

·       Proficient in Linux/Unix, including shell scripting.

·       Ability to translate business and architectural features into quality, consistent software design.

·       Solid understanding of programming practices, emphasizing reusable, flexible, and reliable code.

Read more
Piako
PiaKo Store
Posted by PiaKo Store
Kolkata
4 - 8 yrs
₹12L - ₹24L / yr
skill iconPython
skill iconAmazon Web Services (AWS)
ETL

We are a rapidly expanding global technology partner, that is looking for a highly skilled Senior (Python) Data Engineer to join their exceptional Technology and Development team. The role is in Kolkata. If you are passionate about demonstrating your expertise and thrive on collaborating with a group of talented engineers, then this role was made for you!

At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.

We are searching for a senior engineer to strengthen their global projects team. They seek an experienced Senior Data Engineer with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.

As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.

Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.

Key Responsibilities:

  • ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes (a minimal load sketch follows this list).
  • AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, AWS S3, and AWS Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
  • Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
  • Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
  • Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
  • Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
  • Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
  • Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
  • Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
  • Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.
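
By way of illustration, here is a minimal Python sketch of the final load step of such a pipeline: bulk-loading S3 data into Redshift with COPY. The cluster endpoint, table, bucket, and IAM role below are all placeholders:

    import psycopg2  # pip install psycopg2-binary

    # Placeholder connection details.
    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="***",
    )
    conn.autocommit = True
    with conn.cursor() as cur:
        # COPY is the idiomatic bulk-load path into Redshift;
        # loading row-by-row over INSERT is far slower.
        cur.execute("""
            COPY staging.events
            FROM 's3://my-bucket/raw/events/2024-01-01/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            FORMAT AS JSON 'auto'
        """)
    conn.close()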

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Proven experience as a Data Engineer with a focus on ETL pipeline development.
  • Strong proficiency in Python programming.
  • In-depth knowledge of AWS serverless technologies and services.
  • Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
  • Experience with version control systems (e.g., Git).
  • Strong SQL skills for data extraction and transformation.
  • Excellent problem-solving and troubleshooting abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Effective communication skills for articulating technical concepts to non-technical stakeholders.
  • Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer are a plus.

Preferred Experience:

  • Knowledge of data orchestration and workflow management tools
  • Familiarity with data visualization tools (e.g., Tableau, Power BI).
  • Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.

What You Can Expect:

- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.

- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.

- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.

They offer an exciting and professional environment with great career and growth opportunities. Their office is located in the heart of Salt Lake Sector V, offering a terrific workspace that's both accessible and inspiring. Their team members enjoy a professional work environment with regular team outings. Joining the team means becoming part of a vibrant and dynamic team where your skills will be valued, your creativity will be nurtured, and your contributions will make a difference. In this role, you can work alongside some of the brightest minds in the industry.

If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.

Apply today for more information about this exciting opportunity.

Onsite Location: Kolkata, India (Salt Lake Sector V)


Read more
This opening is with an MNC

Agency job
via LK Consultants by Namita Agate
Bengaluru (Bangalore)
1 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+9 more

ROLE AND RESPONSIBILITIES

Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills per business requirements. Familiar with extracting relevant data, then cleansing and transforming that data into insights that drive business value, through the use of data analytics, data visualization and data modeling techniques.


QUALIFICATIONS AND EDUCATION REQUIREMENTS

Technical Bachelor’s Degree.

Non-Technical Degree holders should have 1+ years of relevant experience.

Read more
Career Forge

Mohammad Faiz
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
skill iconPython
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, Pyspark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data (a small time-series sketch follows this list).

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/Numpy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Read more
dataeaze systems

Ankita Kale
Posted by Ankita Kale
Remote only
5 - 8 yrs
₹12L - ₹22L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark
ETL

POST - SENIOR DATA ENGINEER WITH AWS


Experience : 5 years


Must-have:

• Highly skilled in Python and PySpark

• Expertise in writing AWS Glue ETL job scripts

• Experience in working with Kafka (a minimal consumer sketch follows below)

• Extensive SQL DB experience – Postgres
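
A minimal sketch, for illustration only, of the Kafka-to-Postgres path implied by the must-haves above; the broker address, topic, and table are placeholders, and kafka-python is just one of several client libraries that would fit:

    import json

    import psycopg2
    from kafka import KafkaConsumer  # pip install kafka-python

    # Placeholder topic, broker, and connection string.
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    pg = psycopg2.connect("dbname=analytics user=etl_user")
    pg.autocommit = True

    for msg in consumer:
        order = msg.value
        with pg.cursor() as cur:
            # Idempotent insert into a hypothetical landing table.
            cur.execute(
                "INSERT INTO orders_raw (id, payload) VALUES (%s, %s) "
                "ON CONFLICT (id) DO NOTHING",
                (order["id"], json.dumps(order)),
            )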

Good-to-have:

• Experience in working with data analytics and modelling

• Hands-on experience with the Power BI visualization tool

• Knowledge of, and hands-on experience with, a version control system - Git

Common:

• Excellent communication and presentation skills (written and verbal) at all levels of an organization

• Should be results-oriented, with the ability to prioritize and drive multiple initiatives to completion on time

• Proven ability to influence a diverse, geographically dispersed group of individuals to facilitate, moderate, and influence productive design and implementation discussions driving towards results


Shifts - Flexible ( might have to work as per US Shift timings for meetings ).

Employment Type - Any

Read more
hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
skill iconPython
Amazon Redshift
skill iconAmazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work in a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with Devops and senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.

➢ Experience using & building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

 ➢ Strong Python and SQL coding skills.

➢ Strong Experience in distributed systems like spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques, like CDC or time/batch-based extraction, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction (a minimal time/batch-based sketch follows this list).
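
To make the time/batch-based extraction concrete, here is a minimal watermark sketch in Python; sqlite3 stands in for the real source database, and all table and column names are invented:

    import sqlite3

    # sqlite3 stands in for the actual source system.
    src = sqlite3.connect("source.db")

    # In a real pipeline the watermark is persisted in a control table;
    # here it is hard-coded for illustration.
    last_watermark = "2024-01-01T00:00:00"

    rows = src.execute(
        "SELECT id, payload, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()

    if rows:
        # The highest updated_at seen becomes the next run's watermark.
        new_watermark = rows[-1][2]
        print(f"extracted {len(rows)} rows; next watermark = {new_watermark}")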


Note: Experience at product-based or e-commerce companies is an added advantage.

Read more
globe teleservices
deepshikha thapar
Posted by deepshikha thapar
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
ETL
skill iconPython
Informatica
Talend



• Good experience in the Extraction, Transformation, and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as the ETL tool on Oracle and SQL Server databases.

• Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, along with project scoping, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.

• Strong experience in dimensional modeling using Star and Snowflake schemas, and in identifying facts and dimensions.

• Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.

• Developed mapping parameters and variables to support SQL override.

• Created mapplets for reuse across different mappings.

• Created sessions and configured workflows to extract data from various sources, transform it, and load it into the data warehouse.

• Used Type 1 SCD and Type 2 SCD mappings to update Slowly Changing Dimension tables (a minimal Type 2 sketch follows this list).

• Modified existing mappings to accommodate new business requirements.

• Involved in performance tuning at the source, target, mapping, session, and system levels.

• Prepared migration documents to move mappings from development to testing and then on to production repositories.

• Extensive experience in developing stored procedures, functions, views, and triggers, and in writing complex SQL queries using PL/SQL.

• Experience in resolving ongoing maintenance issues and bug fixes; monitoring Informatica/Talend sessions as well as performance tuning of mappings and sessions.

• Experience in all phases of data warehouse development, from requirements gathering through code development, unit testing, and documentation.

• Extensive experience in writing UNIX shell scripts and automating ETL processes using UNIX shell scripting.

• Experience in using automation scheduling tools like Control-M.

• Hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, systems integration, and user acceptance testing.

• Build, operate, monitor, and troubleshoot Hadoop infrastructure.

• Develop tools and libraries, and maintain processes for other engineers to access data and write MapReduce programs.
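
To make the Type 2 SCD pattern above concrete, here is a minimal sketch in Python/SQL; sqlite3 stands in for the real warehouse, and all table and column names (dim_customer, stg_customer, valid_from, is_current) are invented for illustration:

    import sqlite3
    from datetime import date

    # A minimal Type 2 SCD load: expire the current row, then insert the
    # new version, preserving full history.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
        CREATE TABLE dim_customer (
            customer_id INTEGER, city TEXT,
            valid_from TEXT, valid_to TEXT, is_current INTEGER
        );
        CREATE TABLE stg_customer (customer_id INTEGER, city TEXT);
        INSERT INTO dim_customer VALUES (1, 'Pune', '2020-01-01', '9999-12-31', 1);
        INSERT INTO stg_customer VALUES (1, 'Mumbai');
    """)
    today = date.today().isoformat()

    # Step 1: close out current rows whose attributes changed.
    cur.execute("""
        UPDATE dim_customer
        SET valid_to = ?, is_current = 0
        WHERE is_current = 1
          AND customer_id IN (
              SELECT s.customer_id FROM stg_customer s
              JOIN dim_customer d ON d.customer_id = s.customer_id
              WHERE d.is_current = 1 AND d.city <> s.city)
    """, (today,))

    # Step 2: insert new current versions for changed and brand-new keys.
    cur.execute("""
        INSERT INTO dim_customer
        SELECT s.customer_id, s.city, ?, '9999-12-31', 1
        FROM stg_customer s
        WHERE NOT EXISTS (
            SELECT 1 FROM dim_customer d
            WHERE d.customer_id = s.customer_id AND d.is_current = 1)
    """, (today,))
    conn.commit()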

Read more
Service based company

Agency job
via Vmultiply solutions by Mounica Buddharaju
Ahmedabad, Rajkot
2 - 4 yrs
₹3L - ₹6L / yr
skill iconPython
skill iconAmazon Web Services (AWS)
SQL
ETL


Qualifications :

  • Minimum 2 years of .NET development experience (ASP.Net 3.5 or greater and C# 4 or greater).
  • Good knowledge of MVC, Entity Framework, and Web API/WCF.
  • ASP.NET Core knowledge is preferred.
  • Creating APIs / Using third-party APIs
  • Working knowledge of Angular is preferred.
  • Knowledge of Stored Procedures and experience with a relational database (MSSQL 2012 or higher).
  • Solid understanding of object-oriented development principles
  • Working knowledge of web, HTML, CSS, JavaScript, and the Bootstrap framework
  • Strong understanding of object-oriented programming
  • Ability to create reusable C# libraries
  • Must be able to write clean comments, readable C# code, and the ability to self-learn.
  • Working knowledge of GIT

Qualities required :

Over above tech skill we prefer to have

  • Good communication and time management skills.
  • Good team player with the ability to contribute on an individual basis.

  • We provide the best learning and growth environment for candidates.

Skills:


  • .NET Core
  • .NET Framework
  • ASP.NET Core
  • ASP.NET MVC
  • ASP.NET Web API
  • C#
  • HTML
Read more
Compile

Sarumathi NH
Posted by Sarumathi NH
Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
Spark

You will be responsible for designing, building, and maintaining data pipelines that handle real-world data (RWD) at Compile. You will be handling both inbound and outbound data deliveries at Compile for datasets including Claims, Remittances, EHR, SDOH, etc.

You will

  • Work on building and maintaining data pipelines (specifically RWD).
  • Build, enhance and maintain existing pipelines in pyspark, python and help build analytical insights and datasets.
  • Scheduling and maintaining pipeline jobs for RWD.
  • Develop, test, and implement data solutions based on the design.
  • Design and implement quality checks on existing and new data pipelines (a minimal PySpark sketch follows this list).
  • Ensure adherence to security and compliance that is required for the products.
  • Maintain relationships with various data vendors and track changes and issues across vendors and deliveries.
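
A minimal PySpark sketch of the kind of quality check mentioned above; the dataset path and the claim_id column are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("rwd-quality-checks").getOrCreate()

    # Hypothetical claims dataset; column names are illustrative.
    df = spark.read.parquet("s3://bucket/claims/2024-01/")

    total = df.count()
    null_ids = df.filter(F.col("claim_id").isNull()).count()
    dupes = total - df.dropDuplicates(["claim_id"]).count()

    # Fail the run loudly rather than loading bad data downstream.
    assert null_ids == 0, f"{null_ids} rows with null claim_id"
    assert dupes == 0, f"{dupes} duplicate claim_id rows"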

You have

  • Hands-on experience with ETL process (min of 5 years).
  • Excellent communication skills and ability to work with multiple vendors.
  • High proficiency with Spark, SQL.
  • Proficiency in Data modeling, validation, quality check, and data engineering concepts.
  • Experience in working with big-data processing technologies using - databricks, dbt, S3, Delta lake, Deequ, Griffin, Snowflake, BigQuery.
  • Familiarity with version control technologies, and CI/CD systems.
  • Understanding of scheduling tools like Airflow/Prefect.
  • Min of 3 years of experience managing data warehouses.
  • Familiarity with healthcare datasets is a plus.

Compile embraces diversity and equal opportunity in a serious way. We are committed to building a team of people from many backgrounds, perspectives, and skills. We know the more inclusive we are, the better our work will be.         

Read more
A Product Based Client,Chennai

Agency job
via SangatHR by Anna Poorni
Chennai
4 - 8 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Spark
PySpark
+2 more

Analytics Job Description

We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team and company challenges, workflows and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.

You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You’ll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.


Role: Data Engineer

Please find below the JD for the Data Engineer role.

Location: Guindy, Chennai

How you’ll contribute:

• Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals

• Develop deep analytical insights to inform and influence product roadmaps and business decisions and help improve the consumer experience

• Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses

• Deeply understand the business and proactively spot risks and opportunities

• Develop dashboards and define metrics that drive key business decisions

• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato

• Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies where fitting

• Work with our engineering teams to continually make our data pipelines and tooling more resilient


Who you are:

• Bachelor’s degree or equivalent required from an accredited institution with a quantitative focus such as Economics, Operations Research, Statistics, or Computer Science, OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist

• Must have strong SQL and data modeling skills, with experience applying skills to thoughtfully create data models in a warehouse environment

• A proven track record of using analysis to drive key decisions and influence change

• Strong storyteller and ability to communicate effectively with managers and executives

• Demonstrated ability to define metrics for product areas, understand the right questions to ask, push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals

• A passion for documentation

• A solution-oriented growth mindset; you’ll need to be a self-starter and thrive in a dynamic environment

• A bias towards communication and collaboration with business and technical stakeholders

• Quantitative rigor and systems thinking

• Prior startup experience is preferred, but not required

• Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation)

• Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis

• Experience with a SQL-focused data transformation framework such as dbt

• Experience with a Business Intelligence tool such as Mode or Tableau


Mandatory Skillset:


• Very strong in SQL

• Spark / PySpark / Python

• Shell scripting


Read more
globe teleservices
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹30L / yr
talend
Informatica
ETL

• Good experience in the Extraction, Transformation, and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as the ETL tool on Oracle and SQL Server databases.

• Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, along with project scoping, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.

• Strong experience in dimensional modeling using Star and Snowflake schemas, and in identifying facts and dimensions.

Read more
Mobile Programming LLC

Sukhdeep Singh
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Data engineering
Nifi
DevOps
ETL

Job description

Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore

Please note: This position is focused on development rather than migration. Experience in NiFi or TIBCO is mandatory.

Mandatory Skills: ETL, DevOps platform, NiFi or TIBCO

We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.

 

Responsibilities:

• Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
• Develop and maintain data-oriented scripting using languages such as Python.
• Create and manage data structures to ensure efficient and accurate data storage and retrieval.
• Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
• Utilize ETL tools such as NiFi and TIBCO to extract, transform, and load data into various systems.
• Apply hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
• Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
• Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
• Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.

 

Requirements:

• A minimum of 6 years of relevant experience as a Data Engineer.
• Proficiency in ETL, SQL, and other advanced data engineering techniques.
• Strong programming skills in scripting languages such as Python.
• Experience in creating and maintaining data structures for efficient data storage and retrieval.
• Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
• Hands-on experience with ETL tools, particularly NiFi and TIBCO.
• In-depth knowledge of database structures, including MSSQL and Vertica.
• Proven experience in managing and operating data platforms.
• Strong problem-solving and analytical skills with the ability to handle complex data challenges.
• Excellent communication and collaboration skills to work effectively in a team environment.
• Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.

Read more
Emids Technologies

Rima Mishra
Posted by Rima Mishra
Bengaluru (Bangalore)
5 - 10 yrs
₹4L - ₹18L / yr
Jasper
JasperReports
ETL
JasperSoft
OLAP
+3 more

Job Description - Jasper 

  • Knowledge of Jasper report server administration, installation and configuration
  • Knowledge of report deployment and configuration
  • Knowledge of Jaspersoft Architecture and Deployment
  • Knowledge of User Management in Jaspersoft Server
  • Experience in developing Complex Reports using Jaspersoft Server and Jaspersoft Studio
  • Understand the Overall architecture of Jaspersoft BI
  • Experience in creating Ad Hoc Reports, OLAP, Views, Domains
  • Experience in report server (Jaspersoft) integration with web application
  • Experience with the JasperReports Server web services API and the Jaspersoft Visualize.js web service API
  • Experience in creating dashboards with visualizations
  • Experience in security and auditing, metadata layer
  • Experience in Interacting with stakeholders for requirement gathering and Analysis
  • Good knowledge ETL design and development, logical and physical data modeling (relational and dimensional)
  • Strong self-initiative to strive for both personal & technical excellence.
  • Coordinate efforts across Product development team and Business Analyst team.
  • Strong business and data analysis skills.
  • Domain knowledge of Healthcare an advantage.
  • Should be strong at coordinating with onshore resources on development.
  • Data oriented professional with good communications skills and should have a great eye for detail.
  • Interpret data, analyze results and provide insightful inferences
  • Maintain relationship with Business Intelligence stakeholders
  • Strong Analytical and Problem Solving skills 


Read more
Personal Care Product Manufacturing

Agency job
via Qrata by Rayal Rajan
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources (a minimal streaming sketch follows this list).

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
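
Purely as an illustration of such a pipeline, here is a minimal Spark Structured Streaming sketch in Python that reads from Kafka and lands raw events in object storage; the topic, paths, and the assumption that the spark-sql-kafka package is on the classpath are all placeholders:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

    # Read a hypothetical "events" topic from Kafka.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "events")
              .load())

    # Kafka delivers key/value as bytes; cast to strings for downstream use.
    parsed = events.select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("json_payload"),
        "timestamp",
    )

    # Append raw events to a bronze layer, with checkpointing for recovery.
    query = (parsed.writeStream
             .format("parquet")
             .option("path", "s3://bucket/bronze/events/")
             .option("checkpointLocation", "s3://bucket/checkpoints/events/")
             .outputMode("append")
             .start())
    query.awaitTermination()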


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



Read more
Invesco

Chandana Srii
Posted by Chandana Srii
Hyderabad
3 - 7 yrs
₹10L - ₹30L / yr
skill iconReact.js
skill iconJavascript
Plotly
skill iconR Language
D3.js
+2 more

Invesco is seeking a skilled React.js Developer with a strong background in data analytics to join our team. The Engineer within the Enterprise Risk function will manage complex data engineering, analysis, and programming tasks in support of the execution of the Enterprise Risk and Internal Audit activities and projects as defined by the Enterprise Risk Analytics leadership team. The Engineer will manage and streamline data extraction, transformation, and load processes, design use cases, analyze data, and apply creativity and data science techniques to deliver effective and efficient solutions enabling greater risk intelligence and insights.

 

Key Responsibilities / Duties:

 

  • Acquire, transform, and manage data supporting risk and control-related activities
  • Design, build, and maintain data analytics and data science tools supporting individual tasks and projects.
  • Maintain programming code, software packages, and databases to support ongoing analytics activities.
  • Exercise judgment in determining the application of data analytics for business processes, including the identification and location of data sources.
  • Actively discover data analytics capabilities within the firm and leverage such capabilities where possible.
  • Introduce new data analytics-related tools and technologies to business partners and consumers.
  • Support business partners’, data analysts’, and consumers’ understanding of product logic and related system processes
  • Share learnings and alternative techniques with other Engineers and Data Analysts.
  • Perform other duties and special projects as assigned by the Enterprise Risk Analytics leadership team and other leaders across Enterprise Risk and Internal Audit.
  • Actively contribute to developing a culture of innovation within the department and risk and control awareness throughout the organization.
  • Keep Head of Data Science & Engineering and departmental leadership informed of activities.

Work Experience / Knowledge:

 

  • Minimum 5 years of experience in data analysis, data management, software development or data-related risk management roles; previous experience in programming will be considered.
  • Experience within the financial services sector preferred

Skills / Other Personal Attributes Required:

 

  • Proactive problem solver with the ability to identify, design, and deliver solutions based on high level objectives and detailed requirements. Thoroughly identify and investigate issues and determine the appropriate course of action
  • Excellent in code development supporting data analytics and visualization, preferably programming languages and libraries such as JavaScript, R, Python, NextUI, ReactUI, Shiny, Streamlit, Plotly and D3.js
  • Excellent with data extraction, transformation, and load processes (ETL), structured query language (SQL), and database management. 
  • Strong self-learner to continuously develop new technical capabilities to become more efficient and productive
  • Experience using end user data analytics software such as Tableau, PowerBI, SAS, and Excel a plus
  • Proficient with various disciplines of data science, such as machine learning, natural language processing and network science
  • Experience with end-to-end implementation of web applications on AWS, including using services such as EC2, EKS, RDS, ALB, Route53 and Airflow a plus
  • Self-starter and motivated; must be able to work without frequent direct supervision
  • Proficient with Microsoft Office applications (Teams, Outlook, MS Word, Excel, PowerPoint etc.)
  • Excellent analytical and problem-solving skills
  • Strong project management and administrative skills
  • Strong written and verbal communication skills (English)
  • Results-oriented and comfortable as an individual contributor on specific assignments
  • Ability to handle confidential information and communicate clearly with individuals at a wide range of levels on sensitive matters
  • Demonstrated ability to work in a diverse, cross-functional, and international environment
  • Adaptable and comfortable with changing environment
  • Demonstrates high professional ethics

Formal Education:

 

  • Bachelor’s degree in Information Systems, Computer Science, Computer Engineering, Mathematics, Statistics, Data Science, or Statistics preferred. Other technology or quantitative finance-related degrees considered depending upon relevant experience
  • MBA, Master’s degree in Information Systems, Computer Science, Mathematics, Statistics, Data Science, or Finance a plus

License / Registration / Certification:

 

  • Professional data science, analytics, business intelligence, visualization, and/or development designation (e.g., CAP, CBIP, or other relevant product-specific certificates) or actively pursuing the completion of such designation preferred
  • Other certifications considered depending on domain and relevant experience

Working Conditions:

 

Potential for up to 10% domestic and international travel

 

Read more
Gipfel & Schnell Consultings Pvt Ltd
TanmayaKumar Pattanaik
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years overall experience in ETLs, data pipeline, Data Warehouse development and database design

▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.

▪ Expert in SQL, worked on advanced SQL for at least 2+ years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview

▪ Comfortable working in an agile environment

Read more
InfoCepts
Lalsaheb Bepari
Posted by Lalsaheb Bepari
Chennai, Pune, Nagpur
7 - 10 yrs
₹5L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Responsibilities:

 

• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured & semi-structured data processing

• Implementing Spark-based ETL processing frameworks (a minimal PySpark sketch follows this list)

• Implementing big data pipelines for data ingestion, storage, processing & consumption

• Modifying the Informatica-Teradata & Unix based data pipeline

• Enhancing the Talend-Hive/Spark & Unix based data pipelines

• Developing and deploying Scala/Python based Spark jobs for ETL processing

• Strong SQL & DWH concepts.
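
A minimal PySpark sketch of such an ETL job, for illustration only; the landing path, column names, and the dwh.orders table are invented:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hive support lets the job write managed, queryable tables.
    spark = (SparkSession.builder
             .appName("spark-etl")
             .enableHiveSupport()
             .getOrCreate())

    # Ingest: read raw CSV files from a hypothetical landing zone.
    raw = spark.read.option("header", True).csv("/data/landing/orders/")

    # Transform: enforce types and drop rows missing the business key.
    cleaned = (raw
               .withColumn("amount", F.col("amount").cast("double"))
               .filter(F.col("order_id").isNotNull()))

    # Consume: write partitioned Parquet, registered as a Hive table.
    (cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .format("parquet")
        .saveAsTable("dwh.orders"))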

 

Preferred Background:

 

• Function as an integrator between business needs and technology, helping to create solutions that meet clients’ business needs

• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives

• Understanding of the business’s EDW systems and creating high-level design documents and low-level implementation documents

• Understanding of the business’s Big Data Lake systems and creating high-level design documents and low-level implementation documents

• Designing big data pipelines for data ingestion, storage, processing & consumption

Exponentia.ai

at Exponentia.ai

1 product
1 recruiter
Vipul Tiwari
Posted by Vipul Tiwari
Mumbai
7 - 10 yrs
₹13L - ₹19L / yr
Project Management
IT project management
Software project management
Business Intelligence (BI)
Data Warehouse (DWH)
+8 more

Role: Project Manager

Experience: 8-10 Years

Location: Mumbai


Company Profile:



Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are rapidly expanding across machine learning, Data Engineering and Analytics functions. Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.

One of the top partners of Databricks, Azure, Cloudera (a leading analytics player) and Qlik (a leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik and the "Excellence in Business Process Automation Award" (IMEA) by Automation Anywhere.

Get to know more about us at http://www.exponentia.ai and https://in.linkedin.com/company/exponentiaai 


Role Overview:


·        The project manager shall be responsible for overseeing the successful delivery of a range of projects in Business Intelligence, Data Warehousing, and Analytics/AI-ML.

·        The project manager is expected to manage projects and lead teams of BI engineers, data engineers, data scientists, and application developers.


Job Responsibilities:


·        Effort estimation, creating a project plan, planning milestones and activities, and tracking progress.

·        Identify risks and issues, and come up with a mitigation plan.

·        Status reporting to both internal and external stakeholders.

·        Communicate with all stakeholders.

·        Manage end-to-end project lifecycle - requirements gathering, design, development, testing and go-live.

·        Manage end-to-end BI or data warehouse projects.

·        Must have experience in running Agile-based project development.


Technical skills


·        Experience in Business Intelligence, Data Warehousing, or Analytics projects.

·        Understanding of data lake and data warehouse solutions, including ETL pipelines (a minimal sketch follows this list).

·        Good to have - Knowledge of Azure Blob Storage, Azure Data Factory, and Azure Synapse Analytics.

·        Good to have - Knowledge of Qlik Sense or Power BI

·        Good to have - Certified in PMP, PRINCE2, or Agile project management.

·        Excellent written and verbal communication skills.
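
As referenced above, a toy extract-transform-load skeleton showing the pipeline stages a project manager on these projects would plan and track. Every function, file, and field here is an illustrative assumption, not a client implementation.

```python
# Toy ETL pipeline skeleton: the three stages a BI/DWH project is
# typically planned around. All names and paths are illustrative.
import csv
import json


def extract(path: str) -> list[dict]:
    # Pull raw records from a source file (stand-in for a source system).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[dict]:
    # Clean and reshape: drop incomplete rows, normalise types.
    return [
        {"customer": r["customer"].strip(), "amount": float(r["amount"])}
        for r in rows
        if r.get("customer") and r.get("amount")
    ]


def load(rows: list[dict], path: str) -> None:
    # Land curated records in a warehouse zone (a JSON file here).
    with open(path, "w") as f:
        json.dump(rows, f, indent=2)


if __name__ == "__main__":
    load(transform(extract("raw_orders.csv")), "curated_orders.json")
```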


Education:

MBA, B.E., B.Tech., or MCA degree

An 8-year-old IT Services and consulting company.

Agency job
via Startup Login by Shreya Sanchita
Hyderabad, Bengaluru (Bangalore)
8 - 12 yrs
₹30L - ₹50L / yr
PHP
JavaScript
React.js
Angular (2+)
AngularJS (1.x)
+17 more

CTC Budget: 35-50LPA

Location: Hyderabad/Bangalore

Experience: 8+ Years


Company Overview:


An 8-year-old IT Services and consulting company based in Hyderabad, providing services that maximize product value while delivering rapid incremental innovation, with extensive SaaS company M&A experience including 20+ closed transactions on both the buy and sell sides. They have over 100 employees and are looking to grow the team.


● Work with, learn from, and contribute to a diverse, collaborative development team

● Use plenty of PHP, Go, JavaScript, MySQL, PostgreSQL, ElasticSearch, Redshift, AWS Services and other technologies

● Build efficient and reusable abstractions and systems

● Create robust cloud-based systems used by students globally at scale

● Experiment with cutting-edge technologies and contribute to the company’s product roadmap

● Deliver data at scale to bring value to clients


Requirements


You will need:

● Experience working with a server-side language in a full-stack environment

● Experience with various database technologies (relational, NoSQL, document-oriented, etc.) and query concepts in high-performance environments

● Experience in one of these areas: React, Backbone

● Understanding of ETL concepts and processes

● Great knowledge of design patterns and back-end architecture best practices

● Sound knowledge of front-end basics like JavaScript, HTML, CSS

● Experience with TDD, automated testing (a minimal sketch follows this list)

● 12+ years’ experience as a developer

● Experience with Git or Mercurial

● Fluent written & spoken English
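
As referenced in the TDD item above, a minimal pytest-style sketch: a small pure function plus the tests that would drive it. The function and test names are assumptions, not part of any actual codebase.

```python
# TDD-style sketch: a small transform function and the pytest tests
# that would drive its design. All names are illustrative assumptions.

def normalise_email(raw: str) -> str:
    """Lower-case and trim an email address."""
    return raw.strip().lower()


# Tests (run with `pytest`).
def test_normalise_email_strips_whitespace():
    assert normalise_email("  User@Example.COM ") == "user@example.com"


def test_normalise_email_handles_already_clean_input():
    assert normalise_email("user@example.com") == "user@example.com"
```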

It would be great if you have:

● B.Sc or M.Sc degree in Software Engineering, Computer Science or similar

● Experience and/or interest in API Design

● Experience with Symfony and/or Doctrine

● Experience with Go and Microservices

● Experience with message queues, e.g., SQS, Kafka, Kinesis, RabbitMQ

● Experience working with a modern Big Data stack

● Contributed to open source projects

● Experience working in an Agile environment
