
50+ ETL Jobs in India

Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

Springer Capital
Posted by Andrew Rose
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
Warehousing concepts
Google Cloud Platform (GCP)
+15 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process. 

 

Responsibilities: 

▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources 

▪ Develop ETL processes to collect, clean, and transform data from internal and external systems 

▪ Support integration of data into dashboards, analytics tools, and reporting systems 

▪ Collaborate with data analysts and software developers to improve data accessibility and performance 

▪ Document workflows and maintain data infrastructure best practices 

▪ Assist in identifying opportunities to automate repetitive data tasks 
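
For orientation, here is a minimal, illustrative Python sketch of the collect, clean, and load flow these responsibilities describe. The column names and the SQLite target are assumptions for the example only, not part of the role.

# A minimal sketch of the collect -> clean -> load flow described above.
# Column names and the SQLite target are assumptions for illustration.
import sqlite3

import pandas as pd


def extract() -> pd.DataFrame:
    # In practice this would read an internal export or call an external API,
    # e.g. pd.read_csv("raw_deals.csv"); inline records keep the sketch runnable.
    return pd.DataFrame(
        {
            "property_id": ["P-001", "P-002", None],
            "closing_date": ["2024-03-01", "2024-04-15", "2024-05-01"],
            "amount_usd": [1250000.456, 890000.0, 50000.0],
        }
    )


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Drop rows missing the key, normalise dates, and round monetary values.
    cleaned = raw.dropna(subset=["property_id"])
    return cleaned.assign(
        closing_date=pd.to_datetime(cleaned["closing_date"], errors="coerce"),
        amount_usd=cleaned["amount_usd"].round(2),
    )


def load(frame: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    # Append cleaned records into a reporting table that dashboards can query.
    with sqlite3.connect(db_path) as conn:
        frame.to_sql("deals", conn, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract()))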

Deltek
Posted by Harsha Mehrotra
Remote only
4 - 6 yrs
₹11L - ₹20L / yr
MS SQL Server
Python
Git
ETL

Position Responsibilities:


This role is part of the consulting team and will be responsible for performing data migrations from legacy and third-party databases, as well as data changes and data corrections.  This role is customer-facing and is expected to conduct customer phone calls and presentations. Verifying the accuracy and completeness of all projects, to ensure quality work, is a crucial part of this role.  In addition, this role is expected to contribute to the maintenance and enhancement of Deltek’s data migration tools and documents, as well as provide feedback and solutions to improve the scalability and overall implementation experience. The ideal candidate will be quality-focused while managing multiple priorities in a fast-paced environment to meet business and technical demands across a broad spectrum of customers from multiple industry verticals.


Job Duties:

  • Perform data migrations, data changes, and data corrections.
  • Develop and maintain extract, transform, and load processes (e.g. Python, Airflow, SSIS) that handle diverse source systems and high-volume datasets.
  • Learn proprietary data tools for existing and new products, including each product’s concepts, methods, procedures, technologies, and systems.
  • Build and govern a centralized repository of reusable, product-agnostic SQL queries, views, stored procedures, and functions.
  • Drive Git-based version control, code reviews, and CI/CD for database artefacts.
  • Assist customers and consultants with cleaning up existing data and building substitution tables to prepare for data migrations and changes.
  • Verify accuracy and completeness of converted data.
  • Lead root-cause investigations and collaborate on corrective actions with the broader team.
  • Provide thorough and timely communication regarding issues, change requests, and project statuses to the internal and external project team, including customers, project coordinators, functional consultants, and supervisors.
  • Display excellent communication skills, including the presentation, persuasion, and negotiation skills required in working with customers and co-workers, and the ability to communicate effectively and remain calm and courteous under pressure.
  • Author clear technical documentation (e.g. ETL designs, SQL library guidelines).
  • Document all data migrations, changes, and corrections accurately and completely in conversion delivery documentation.
  • Work collaboratively in a team environment with a spirit of cooperation.
  • Mentor colleagues on SQL techniques and ETL patterns, fostering a culture of continuous learning.
  • Provide exceptional customer service.
  • Estimate time and budget requirements for data migrations and changes.
  • Work within budget and timeframe requirements for completion of each project.
  • Handle changes in scope through documentation in conversion delivery documentation, statements of work, and change orders.
  • Create and maintain a procedures manual for data migrations, changes, and fixes.
  • File TFS tickets and communicate to the development team any bugs and enhancements needed for new and existing data tools to improve the accuracy, timeliness, and profitability of work.
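
As one hedged illustration of the extract, transform, and load duty above, the sketch below shows the shape such a pipeline might take in Airflow (2.4+-style API), one of the orchestrators the posting names. The DAG id, task names, and stubbed callables are assumptions, not Deltek tooling.

# A minimal Airflow DAG sketch for the kind of migration pipeline described above.
# DAG id, schedule, and task bodies are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_legacy_rows(**_):
    """Pull rows from the legacy or third-party source (stubbed here)."""
    print("extracting legacy rows")


def transform_rows(**_):
    """Apply substitution tables and cleansing rules (stubbed here)."""
    print("transforming rows")


def load_target(**_):
    """Load verified rows into the target schema (stubbed here)."""
    print("loading target")


with DAG(
    dag_id="legacy_data_migration",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_legacy_rows)
    transform = PythonOperator(task_id="transform", python_callable=transform_rows)
    load = PythonOperator(task_id="load", python_callable=load_target)

    # Simple linear dependency: extract, then transform, then load.
    extract >> transform >> load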


Qualifications:


Essential Skills & Experience

  • SQL Mastery: 5+ years crafting complex, high-performance SQL (T-SQL and equivalents) across platforms (e.g. SQL Server, PostgreSQL, Oracle).
  • ETL Expertise: Demonstrated success designing, building and operating end-to-end ETL/ELT pipelines in enterprise environments.
  • Scripting & Orchestration: Proficiency in at least one language (e.g. Python (preferable), PowerShell) and familiarity with orchestration tools (Airflow, Azure Data Factory, etc.).
  • Version Control & CI/CD: Strong Git workflows experience and automated deployment pipelines for database objects.
  • Client-Facing Experience: Comfortable engaging with clients to capture requirements, present technical options and deliver projects to schedule.
  • 3+ years of hands-on experience working in a customer-facing role
  • Advanced written and verbal communication skills using English
  • Advanced conflict management and negotiation
  • Advanced troubleshooting and problem-solving skills
  • Basic functional knowledge of Deltek VantagePoint – for validation purposes
  • Basic understanding of accounting principles


Service Based

Agency job
via Vikash Technologies by Rishika Teja
Bengaluru (Bangalore), Mumbai
5 - 8 yrs
₹12L - ₹22L / yr
Salesforce
JavaScript
SQL
ETL
Salesforce Lightning
+1 more


Skills:

  • Frontend development using JavaScript and LWC; expertise in backend development using Apex, Flows, and Async Apex
  • Understanding of database concepts: SOQL, SOSL, and SQL
  • Hands-on experience in API integration using SOAP, REST, and GraphQL
  • Experience with ETL tools, data migration, and data governance
  • Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework
  • Follows an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, and Bitbucket


Wissen Technology
Posted by Priyanka Seshadri
Hyderabad, Pune, Bengaluru (Bangalore)
6 - 13 yrs
Best in industry
SQL
ETL
Banking
  • 5-10 years of experience in ETL testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience with JIRA and the Xray defect management tool is good to have.
  • Exposure to financial domain knowledge is considered a plus.
  • Test data readiness (data quality) and address code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Strong collaborative experience across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution.
  • Prior experience with State Street and Charles River Development (CRD) is considered a plus.
  • Experience with tools such as PowerPoint, Excel, and SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus.

Key Attributes include:

  • Team player with professional and positive approach
  • Creative, innovative and able to think outside of the box
  • Strong attention to detail during root cause analysis and defect issue resolution
  • Self-motivated & self-sufficient
  • Effective communicator both written and verbal
  • Brings a high level of energy with enthusiasm to generate excitement and motivate the team
  • Able to work under pressure with tight deadlines and/or multiple projects
  • Experience in negotiation and conflict resolution
VDart

Agency job
via VDart by Don Blessing
Remote only
8 - 10 yrs
₹20L - ₹25L / yr
Cleo
EDI
EDI management
ERP management
Supply Chain Management (SCM)
+5 more

Role: Cleo EDI Solution Architect / Sr EDI Developer

Location : Remote

Start Date – ASAP


This is a niche technology (Cleo EDI) that enables integration of ERP systems with transportation management, extended supply chain, and related applications.

 

Expertise in designing and developing end-to-end integration solutions, especially B2B integrations involving EDI (Electronic Data Interchange) and APIs.

Familiarity with Cleo Integration Cloud or similar EDI platforms.

Strong experience with Azure Integration Services, particularly:

  • Azure Data Factory – for orchestrating data movement and transformation
  • Azure Functions – for serverless compute tasks in integration pipelines
  • Azure Logic Apps or Service Bus – for message handling and triggering workflows

Understanding of ETL/ELT processes and data mapping.

Solid grasp of EDI standards (e.g., X12, EDIFACT) and workflows.

Experience working with EDI developers and analysts to align business requirements with technical implementation.

Familiarity with Cleo EDI tools or similar platforms.

Develop and maintain EDI integrations using Cleo Integration Cloud (CIC), Cleo Clarify, or similar Cleo solutions.

Create, test, and deploy EDI maps for transactions such as 850, 810, 856, etc., and other EDI/X12/EDIFACT documents.

Configure trading partner setups, including communication protocols (AS2, SFTP, FTP, HTTPS).

Monitor EDI transaction flows, identify errors, troubleshoot, and implement fixes.

Collaborate with business analysts, ERP teams, and external partners to gather and analyze EDI requirements.

Document EDI processes, mappings, and configurations for ongoing support and knowledge sharing.

Provide timely support for EDI-related incidents, ensuring minimal disruption to business operations.

Participate in EDI onboarding projects for new trading partners and customers.
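
For readers new to EDI, here is a small, illustrative Python snippet that splits a toy X12 850 (purchase order) interchange into segments and elements and picks out the PO1 line items. It is plain Python for orientation only, not Cleo functionality, and the sample interchange is simplified.

# Toy X12 850 interchange; segment structure is abbreviated for illustration.
RAW_X12 = (
    "ISA*00*          *00*          *ZZ*SENDERID       *ZZ*RECEIVERID     "
    "*240101*1200*U*00401*000000001*0*P*>~"
    "GS*PO*SENDERID*RECEIVERID*20240101*1200*1*X*004010~"
    "ST*850*0001~"
    "BEG*00*SA*PO12345**20240101~"
    "PO1*1*10*EA*9.99**VP*SKU-001~"
    "SE*4*0001~GE*1*1~IEA*1*000000001~"
)

SEGMENT_TERMINATOR = "~"
ELEMENT_SEPARATOR = "*"


def parse_segments(raw: str) -> list[list[str]]:
    """Split an interchange into segments, and each segment into its elements."""
    segments = [s for s in raw.split(SEGMENT_TERMINATOR) if s]
    return [segment.split(ELEMENT_SEPARATOR) for segment in segments]


for elements in parse_segments(RAW_X12):
    if elements[0] == "PO1":  # purchase-order line item
        _, line_no, qty, uom, unit_price, *_ = elements
        print(f"line {line_no}: {qty} {uom} @ {unit_price}")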

Deqode
Posted by Purvisha Bhavsar
Bengaluru (Bangalore), Pune, Jaipur, Bhopal, Gurugram, Hyderabad
5 - 7 yrs
₹5L - ₹18L / yr
Software Testing (QA)
Manual testing
SQL
ETL

🚀 Hiring: Manual Tester

⭐ Experience: 5+ Years

📍 Location: Pan India

⭐ Work Mode: Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


Must-Have Skills:

✅5+ years of experience in Manual Testing

✅Solid experience in ETL, Database, and Report Testing

✅Strong expertise in SQL queries, RDBMS concepts, and DML/DDL operations

✅Working knowledge of BI tools such as Power BI

✅Ability to write effective Test Cases and Test Scenarios

Moative
Posted by Eman Khan
Chennai
3 - 5 yrs
₹10L - ₹25L / yr
Python
PySpark
Scala
Data engineering
ETL
+12 more

About Moative

Moative, an Applied AI company, designs and builds transformation AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.


Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians (including PhDs) from top engineering and research institutes such as IITs, CERN, IISc, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.


Work you’ll do

As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.


Responsibilities

  • Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/ Azure/ GCP)
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements
  • Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs


Who you are

You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration. 


You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.


Skills & Requirements

  • 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
  • Solid knowledge of cloud infrastructure and data-related services on AWS (EC2, EMR, RDS, Redshift) and/or Azure.
  • Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
  • Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow).
  • Experience with common relational SQL, NoSQL and Graph databases.
  • Strong experience with scripting languages: Python, PySpark, Scala, etc.
  • Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc
  • Experience with big data tools (Spark, Kafka, etc) and stream processing.
  • Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
  • Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
  • Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
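
As a hedged sketch of the pipeline work described above, the snippet below shows a small PySpark batch job reading raw files, applying a transformation, and writing a curated output. The bucket paths and column names are assumptions for illustration, not project specifics.

# A minimal PySpark extract-transform-load sketch; paths and columns are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

# Extract: read raw event exports (header row assumed, schema inferred for brevity).
raw = spark.read.option("header", True).csv("s3a://raw-bucket/events/*.csv")

# Transform: keep valid rows and aggregate to a daily metric.
daily = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date")
       .agg(F.countDistinct("user_id").alias("daily_active_users"))
)

# Load: write a partitioned, columnar output for downstream analytics.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://curated-bucket/daily_active_users/"
)

spark.stop()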


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:

  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.  


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers. 


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Fountane inc
Posted by HR Fountane
Remote only
5 - 9 yrs
₹18L - ₹32L / yr
Amazon Web Services (AWS)
AWS Lambda
AWS CloudFormation
ETL
Docker
+3 more

Position Overview: We are looking for an experienced and highly skilled Senior Data Engineer to join our team and help design, implement, and optimize data systems that support high-end analytical solutions for our clients. As a customer-centric Data Engineer, you will work closely with clients to understand their business needs and translate them into robust, scalable, and efficient technical solutions. You will be responsible for end-to-end data modelling, integration workflows, and data transformation processes while ensuring security, privacy, and compliance. In this role, you will also leverage the latest advancements in artificial intelligence, machine learning, and large language models (LLMs) to deliver high-impact solutions that drive business success. The ideal candidate will have a deep understanding of data infrastructure, optimization techniques, and cost-effective data management.


Key Responsibilities:


• Customer Collaboration:

– Partner with clients to gather and understand their business requirements, translating them into actionable technical specifications.

– Act as the primary technical consultant to guide clients through data challenges and deliver tailored solutions that drive value.


• Data Modeling & Integration:

– Design and implement scalable, efficient, and optimized data models to support business operations and analytical needs.

– Develop and maintain data integration workflows to seamlessly extract, transform, and load (ETL) data from various sources into data repositories.

– Ensure smooth integration between multiple data sources and platforms, including cloud and on-premise systems.


• Data Processing & Optimization:

– Develop, optimize, and manage data processing pipelines to enable real-time and batch data processing at scale.

– Continuously evaluate and improve data processing performance, optimizing for throughput while minimizing infrastructure costs.


• Data Governance & Security:

– Implement and enforce data governance policies and best practices, ensuring data security, privacy, and compliance with relevant industry regulations (e.g., GDPR, HIPAA).

– Collaborate with security teams to safeguard sensitive data and maintain privacy controls across data environments.


• Cross-Functional Collaboration:

– Work closely with data engineers, data scientists, and business analysts to ensure that the data architecture aligns with organizational objectives and delivers actionable insights.

– Foster collaboration across teams to streamline data workflows and optimize solution delivery.


• Leveraging Advanced Technologies:

– Utilize AI, machine learning models, and large language models (LLMs) to automate processes, accelerate delivery, and provide smart, data-driven solutions to business challenges.

– Identify opportunities to apply cutting-edge technologies to improve the efficiency, speed, and quality of data processing and analytics.


• Cost Optimization:

– Proactively manage infrastructure and cloud resources to optimize throughput while minimizing operational costs.

– Make data-driven recommendations to reduce infrastructure overhead and increase efficiency without sacrificing performance.
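
To make one of the governance points above concrete, here is a tiny, illustrative sketch of pseudonymising direct identifiers before records leave a pipeline. The column names and salt handling are assumptions for the example; it is not a compliance recipe.

# Illustrative privacy control: replace direct identifiers with salted digests.
import hashlib

import pandas as pd

PII_COLUMNS = ["email", "phone"]
SALT = "rotate-me-via-a-secret-manager"  # placeholder; never hard-code in production


def pseudonymise(frame: pd.DataFrame) -> pd.DataFrame:
    """Replace direct identifiers with salted SHA-256 digests, leaving metrics intact."""
    masked = frame.copy()
    for column in PII_COLUMNS:
        masked[column] = masked[column].map(
            lambda value: hashlib.sha256(f"{SALT}{value}".encode()).hexdigest()
        )
    return masked


records = pd.DataFrame(
    {"email": ["a@example.com"], "phone": ["+91-9000000000"], "spend_usd": [42.0]}
)
print(pseudonymise(records))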


Qualifications:


• Experience:

– Proven experience (5+ years) as a Data Engineer or similar role, designing and implementing data solutions at scale.

– Strong expertise in data modelling, data integration (ETL), and data transformation processes.

– Experience with cloud platforms (AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).


• Technical Skills:

– Advanced proficiency in SQL, data modelling tools (e.g., Erwin, PowerDesigner), and data integration frameworks (e.g., Apache NiFi, Talend).

– Strong understanding of data security protocols, privacy regulations, and compliance requirements.

– Experience with data storage solutions (e.g., data lakes, data warehouses, NoSQL, relational databases).


• AI & Machine Learning Exposure:

– Familiarity with leveraging AI and machine learning technologies (e.g., TensorFlow, PyTorch, scikit-learn) to optimize data processing and analytical tasks.

– Ability to apply advanced algorithms and automation techniques to improve business processes.


• Soft Skills:

– Excellent communication skills to collaborate with clients, stakeholders, and cross-functional teams.

– Strong problem-solving ability with a customer-centric approach to solution design.

– Ability to translate complex technical concepts into clear, understandable terms for non-technical audiences.


• Education:

– Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field (or equivalent practical experience).


LIFE AT FOUNTANE:

  • Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
  • Competitive pay
  • Health insurance for spouses, kids, and parents.
  • PF/ESI or equivalent
  • Individual/team bonuses
  • Employee stock ownership plan
  • Fun/challenging variety of projects/industries
  • Flexible workplace policy - remote/physical
  • Flat organization - no micromanagement
  • Individual contribution - set your deadlines
  • Above all - culture that helps you grow exponentially!


A LITTLE BIT ABOUT THE COMPANY:

Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.

We’re a team of 120+ strong from around the world who are radically open-minded, believe in excellence, respect one another, and push our boundaries further than ever before.

Cymetrix Software
Posted by Netra Shettigar
Bengaluru (Bangalore)
3 - 8 yrs
₹9L - ₹15L / yr
Salesforce development
Oracle Application Express (APEX)
Salesforce Lightning
SQL
ETL
+6 more

1. Software Development Engineer - Salesforce

What we ask for

We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at the Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect and develop robust applications.

You will work closely with business and product teams to build applications which provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.

You will develop systems by implementing software development principles and clean code practices so that they are scalable, secure, highly resilient, and have low latency.

You should be open to working in a start-up environment and have the confidence to deal with complex issues while keeping focus on solutions and project objectives as your guiding North Star.


Technical Skills:

● Strong hands-on frontend development using JavaScript and LWC

● Expertise in backend development using Apex, Flows, Async Apex

● Understanding of database concepts: SOQL, SOSL, and SQL

● Hands-on experience in API integration using SOAP, REST, and GraphQL

● Experience with ETL tools, data migration, and data governance

● Experience with Apex Design Patterns, Integration Patterns, and the Apex testing framework

● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, Bitbucket

● Should have worked with at least one programming language - Java, Python, C++ - and have a good understanding of data structures
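
As one hedged illustration of the REST-based integration skill above, the sketch below runs a SOQL query against Salesforce's REST query endpoint with plain Python. The instance URL, API version, and access-token handling are placeholders/assumptions; a real project would obtain the token via its OAuth flow.

# Minimal SOQL-over-REST sketch; instance URL, token, and API version are placeholders.
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "<oauth-access-token>"                     # obtained via your OAuth flow
API_VERSION = "v59.0"                                     # adjust to your org's version

soql = "SELECT Id, Name, Industry FROM Account WHERE Industry = 'Banking' LIMIT 10"

response = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": soql},
    timeout=30,
)
response.raise_for_status()

# Each record is a dict keyed by the selected fields plus an "attributes" entry.
for record in response.json().get("records", []):
    print(record["Id"], record["Name"])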


Preferred qualifications

● Graduate degree in engineering

● Experience developing with India stack

● Experience in fintech or banking domain

Egen Solutions
Posted by Hemavathi Panduri
Hyderabad
4 - 8 yrs
₹12L - ₹25L / yr
Python
Google Cloud Platform (GCP)
ETL
Apache Airflow

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.


Key Responsibilities:

  • Design, develop, test, and maintain scalable ETL data pipelines using Python.
  • Work extensively on Google Cloud Platform (GCP) services such as:
  • Dataflow for real-time and batch data processing
  • Cloud Functions for lightweight serverless compute
  • BigQuery for data warehousing and analytics
  • Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  • Google Cloud Storage (GCS) for managing data at scale
  • IAM for access control and security
  • Cloud Run for containerized applications
  • Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
  • Implement and enforce data quality checks, validation rules, and monitoring.
  • Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
  • Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
  • Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
  • Document pipeline designs, data flow diagrams, and operational support procedures.
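
As a hedged illustration of the data-quality checks and SQL validation mentioned above, the snippet below runs a simple null-check against BigQuery with the official google-cloud-bigquery client. The project, dataset, and table names are placeholders, not a customer schema.

# Simple BigQuery data-quality check; table reference is an illustrative placeholder.
from google.cloud import bigquery

client = bigquery.Client()  # relies on Application Default Credentials

sql = """
    SELECT COUNT(*) AS bad_rows
    FROM `my-project.analytics.orders`
    WHERE customer_id IS NULL
"""

# Run the query and read the single result row by its named field.
bad_rows = next(iter(client.query(sql).result())).bad_rows
if bad_rows:
    raise ValueError(f"data quality check failed: {bad_rows} rows with NULL customer_id")
print("orders table passed the NULL check")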

Required Skills:

  • 4–8 years of hands-on experience in Python for backend or data engineering projects.
  • Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
  • Solid understanding of data pipeline architecture, data integration, and transformation techniques.
  • Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
  • Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).



Deqode
Posted by Apoorva Jain
Indore
0 - 2 yrs
₹6L - ₹12L / yr
Python
Machine Learning (ML)
pandas
NumPy
Blockchain
+1 more

About Us

Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.


As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.


What We Build

  • Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
  • DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
  • ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
  • High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
  • Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.


Evaluation Process

  • HR Discussion – A brief conversation to understand your motivation and alignment with the role.
  • Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
  • Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
  • Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
  • Final Interview – A concluding round to explore your background, interests, and team fit in depth.
  • Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.


Job Description : Blockchain Data & ML Engineer


As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.


What You’ll Work On

  • Build and maintain ETL pipelines for ingesting and processing blockchain data.
  • Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
  • Evaluate model performance, tune hyperparameters, and document experimental results.
  • Develop monitoring tools to track model accuracy, data drift, and system health.
  • Collaborate with infrastructure and execution teams to integrate ML components into production systems.
  • Design and maintain databases and storage systems to efficiently manage large-scale datasets.
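
A compact, illustrative sketch of the ingestion end of such a pipeline: pulling a few recent blocks from an EVM node with web3.py (v6-style API) and flattening transactions into rows that a downstream loader could persist. The RPC endpoint is a placeholder, and the row shape is an assumption for the example.

# Flatten recent EVM blocks into plain rows; RPC endpoint is a placeholder.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-rpc-endpoint.example"))


def extract_block(block_number: int) -> list[dict]:
    """Flatten one block's transactions into plain dict rows."""
    block = w3.eth.get_block(block_number, full_transactions=True)
    return [
        {
            "block": block.number,
            "tx_hash": tx["hash"].hex(),
            "sender": tx["from"],
            "recipient": tx["to"],
            "value_eth": float(Web3.from_wei(tx["value"], "ether")),
        }
        for tx in block.transactions
    ]


latest = w3.eth.block_number
rows = [row for n in range(latest - 2, latest + 1) for row in extract_block(n)]
print(f"extracted {len(rows)} transactions from the last 3 blocks")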


Ideal Traits

  • Strong in data structures, algorithms, and core CS fundamentals.
  • Proficiency in any programming language
  • Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
  • Curiosity about how blockchain systems and crypto markets work under the hood.
  • Self-motivated, eager to experiment and learn in a dynamic environment.


Bonus Points For

  • Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
  • Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
  • Participation in hackathons or open-source contributions.


What You’ll Gain

  • Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
  • Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
  • Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
  • Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters


What We Value:

  • Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
  • Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
  • Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
  • Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.

Compensation:

  • INR 6 - 12 LPA
  • Performance Bonuses: Linked to contribution, delivery, and impact.



Bengaluru (Bangalore)
5 - 7 yrs
₹12L - ₹15L / yr
Enterprise Resource Planning (ERP)
Data Analytics
SAP
JD Edwards
ETL
+3 more

Location: Bangalore – Hebbal – 5 Days - WFO

Type:  Contract – 6 Months to start with, extendable

Experience Required: 5+ years in Data Analysis, with ERP migration experience


Key Responsibilities:

  • Analyze and map data from SAP to JD Edwards structures.
  • Define data transformation rules and business logic.
  • Assist with data extraction, cleansing, and enrichment.
  • Collaborate with technical teams to design and execute ETL processes.
  • Perform data validation and reconciliation before and after migration.
  • Work closely with business stakeholders to understand master and transactional data requirements.
  • Support the creation of reports to validate data accuracy in JDE.
  • Document data mapping, cleansing rules, and transformation processes.
  • Participate in testing cycles and assist with UAT data validation.
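
To illustrate the mapping and reconciliation steps above, here is a small, hedged pandas sketch that applies a field-level and value-level mapping from a SAP-style extract to a JDE-shaped table, then checks control totals. The specific mappings are invented for the example and would come from the approved mapping document in practice.

# Illustrative SAP -> JDE field/value mapping with a basic reconciliation check.
import pandas as pd

# Field-level mapping (SAP column -> JDE column) and a value-level mapping for one code.
FIELD_MAP = {"BUKRS": "company_code", "LIFNR": "supplier_id", "WRBTR": "amount"}
PAYMENT_TERMS_MAP = {"Z001": "NET30", "Z002": "NET60"}

sap = pd.DataFrame(
    {"BUKRS": ["1000", "1000"], "LIFNR": ["V001", "V002"],
     "WRBTR": [150.0, 99.5], "ZTERM": ["Z001", "Z002"]}
)

jde = sap.rename(columns=FIELD_MAP)
jde["payment_terms"] = sap["ZTERM"].map(PAYMENT_TERMS_MAP)
jde = jde.drop(columns=["ZTERM"])

# Reconciliation: row counts and control totals must match before sign-off.
assert len(jde) == len(sap)
assert round(jde["amount"].sum(), 2) == round(sap["WRBTR"].sum(), 2)
print(jde)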


Required Skills and Qualifications:

  • Strong experience in SAP ERP data models (FI, MM, SD, etc.).
  • Knowledge of JD Edwards EnterpriseOne data structure is a plus.
  • Proficiency in Excel, SQL, and data profiling tools.
  • Experience in data migration tools like SAP BODS, Talend, or Informatica.
  • Strong analytical, problem-solving, and documentation skills.
  • Excellent communication and collaboration skills.
  • ERP migration project experience is essential.


Deqode
Posted by Apoorva Jain
Indore
0 - 2 yrs
₹6L - ₹12L / yr
Blockchain
ETL
Artificial Intelligence (AI)
Generative AI
Python
+3 more

About Us

Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.


As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.


What We Build

  • Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
  • DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
  • ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
  • High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
  • Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.


Evaluation Process

  • HR Discussion – A brief conversation to understand your motivation and alignment with the role.
  • Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
  • Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
  • Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
  • Final Interview – A concluding round to explore your background, interests, and team fit in depth.
  • Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.


Blockchain Data & ML Engineer


As a Blockchain Data & ML Engineer, you’ll work on ingesting and modeling on-chain behavior, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.


What You’ll Work On

  • Build and maintain ETL pipelines for ingesting and processing blockchain data.
  • Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
  • Evaluate model performance, tune hyperparameters, and document experimental results.
  • Develop monitoring tools to track model accuracy, data drift, and system health.
  • Collaborate with infrastructure and execution teams to integrate ML components into production systems.
  • Design and maintain databases and storage systems to efficiently manage large-scale datasets.


Ideal Traits

  • Strong in data structures, algorithms, and core CS fundamentals.
  • Proficiency in any programming language
  • Curiosity about how blockchain systems and crypto markets work under the hood.
  • Self-motivated, eager to experiment and learn in a dynamic environment.


Bonus Points For

  • Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
  • Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
  • Participation in hackathons or open-source contributions.


What You’ll Gain

  • Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
  • Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
  • Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
  • Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters


What We Value:

  • Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
  • Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
  • Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
  • Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.


Compensation:

  • INR 6 - 12 LPA
  • Performance Bonuses: Linked to contribution, delivery, and impact.
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Remote only
10 - 15 yrs
₹10L - ₹18L / yr
Solution architecture
Denodo
Data Virtualization
Data architecture
SQL
+5 more

Job Title : Solution Architect – Denodo

Experience : 10+ Years

Location : Remote / Work from Home

Notice Period : Immediate joiners preferred


Job Overview :

We are looking for an experienced Solution Architect – Denodo to lead the design and implementation of data virtualization solutions. In this role, you will work closely with cross-functional teams to ensure our data architecture aligns with strategic business goals. The ideal candidate will bring deep expertise in Denodo, strong technical leadership, and a passion for driving data-driven decisions.


Mandatory Skills : Denodo, Data Virtualization, Data Architecture, SQL, Data Modeling, ETL, Data Integration, Performance Optimization, Communication Skills.


Key Responsibilities :

  • Architect and design scalable data virtualization solutions using Denodo.
  • Collaborate with business analysts and engineering teams to understand requirements and define technical specifications.
  • Ensure adherence to best practices in data governance, performance, and security.
  • Integrate Denodo with diverse data sources and optimize system performance.
  • Mentor and train team members on Denodo platform capabilities.
  • Lead tool evaluations and recommend suitable data integration technologies.
  • Stay updated with emerging trends in data virtualization and integration.

Required Qualifications :

  • Bachelor’s degree in Computer Science, IT, or a related field.
  • 10+ Years of experience in data architecture and integration.
  • Proven expertise in Denodo and data virtualization frameworks.
  • Strong proficiency in SQL and data modeling.
  • Hands-on experience with ETL processes and data integration tools.
  • Excellent communication, presentation, and stakeholder management skills.
  • Ability to lead technical discussions and influence architectural decisions.
  • Denodo or data architecture certifications are a strong plus.
Wissen Technology
Posted by Shrutika SaileshKumar
Remote, Bengaluru (Bangalore)
5 - 9 yrs
Best in industry
Python
SDET
BDD
SQL
Data Warehouse (DWH)
+2 more

Primary skill set: QA Automation, Python, BDD, SQL 

As Senior Data Quality Engineer you will:

  • Evaluate product functionality and create test strategies and test cases to assess product quality.
  • Work closely with the on-shore and the offshore team.
  • Work on multiple reports validation against the databases by running medium to complex SQL queries.
  • Build a strong understanding of automation objects and integrations across various platforms and applications.
  • Act as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
  • Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
  • Comfortable working on Linux/Windows environment(s) and hybrid infrastructure models hosted on cloud platforms.
  • Establish processes and a tool set to maintain automation scripts and generate regular test reports.
  • Conduct peer reviews to provide feedback and ensure the test scripts are flawless.

Core/Must have skills:

  • Excellent understanding of and hands-on experience in ETL/DWH testing, preferably on Databricks, paired with Python experience.
  • Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
  • Clear and crisp communication and commitment towards deliverables.
  • Experience in Big Data testing will be an added advantage.
  • Knowledge of Spark, Scala, Hive/Impala, and Python will be an added advantage.
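
As a minimal illustration of the report-versus-database validation described above, here is a pytest-style sketch. It uses an in-memory SQLite table as a stand-in for the warehouse connection, and the table, figures, and expected value are made up for the example.

# pytest sketch: validate a reported figure against the warehouse via SQL.
import sqlite3

import pytest


@pytest.fixture()
def warehouse():
    # Stand-in for a real warehouse connection (e.g. Databricks SQL).
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE fact_sales (region TEXT, amount REAL);
        INSERT INTO fact_sales VALUES ('APAC', 120.0), ('APAC', 30.0), ('EMEA', 50.0);
        """
    )
    yield conn
    conn.close()


def test_report_totals_match_warehouse(warehouse):
    # Value the report claims for APAC (in reality, read from the exported report).
    reported_apac_total = 150.0

    (actual,) = warehouse.execute(
        "SELECT SUM(amount) FROM fact_sales WHERE region = ?", ("APAC",)
    ).fetchone()

    assert actual == pytest.approx(reported_apac_total)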

Good to have skills:

  • Test automation using BDD/Cucumber or TestNG, combined with strong hands-on Java experience with Selenium; working experience in WebDriver.IO is especially valued.
  • Ability to effectively articulate technical challenges and solutions
  • Work experience in qTest, Jira, WebDriver.IO


NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Remote only
10 - 15 yrs
₹10L - ₹18L / yr
Denodo
Data Virtualization
Data integration
ETL

Responsibilities

·        Design and architect data virtualization solutions using Denodo.

·        Collaborate with business analysts and data engineers to understand data requirements and translate them into technical specifications.

·        Implement best practices for data governance and security within Denodo environments.

·        Lead the integration of Denodo with various data sources, ensuring performance optimization.

·        Conduct training sessions and provide guidance to technical teams on Denodo capabilities.

·        Participate in the evaluation and selection of data technologies and tools.

·        Stay current with industry trends in data integration and virtualization.

 

Requirements

·        Bachelor's degree in Computer Science, Information Technology, or a related field.

·        10+ years of experience in data architecture, with a focus on Denodo solutions.

·        Strong knowledge of data virtualization principles and practices.

·        Experience with SQL and data modeling techniques.

·        Familiarity with ETL processes and data integration tools.

·        Excellent communication and presentation skills.

·        Ability to lead technical discussions and provide strategic insights.

·        Certifications related to Denodo or data architecture are a plus


Wissen Technology
Posted by Annie Varghese
Bengaluru (Bangalore)
14 - 21 yrs
Best in industry
Python
Snowflake
Amazon Redshift
ETL
SQL
+3 more

Role: Data Engineer (14+ years of experience)

Location: Whitefield, Bangalore

Mode of Work: Hybrid (3 days from office)

Notice period: Immediate / serving notice with 30 days left

Location: Candidates should be based in Bangalore, as one round will be conducted face-to-face (F2F)


Job Summary:

Role and Responsibilities

● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.

● Design data models to support data analysis and reporting.

● Automate data engineering tasks using scripting languages and tools.

● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.

● Act as a bridge between the engineering and the business team in all areas related to Data.

● Automate monitoring and alerting mechanisms for data pipelines, products, and dashboards, and troubleshoot any issues; on-call participation is required.

● SQL creation and optimization, including modularization and tuning, which may require creating views and tables in the source systems.

● Define best practices for data validation and automate as much as possible, aligning with enterprise standards.

● QA environment data management, e.g. test data management.

Qualifications

● 14+ years of experience as a Data engineer or related role.

● Experience with Agile engineering practices.

● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.

● Experience with SQL and NoSQL databases.

● Ability to work independently or as part of a team.

● Experience with cloud platforms, preferably AWS.

● Strong experience with data warehousing and data lake technologies (Snowflake)

● Expertise in data modelling

● Experience with ETL/ELT tools and methodologies.

● 5+ years of experience in application development including Python, SQL, Scala, or Java

● Experience working on real-time data streaming and data streaming platforms.


NOTE: IT IS MANDATORY TO GIVE ONE TECHNICAL ROUND FACE TO FACE.

Deqode
Posted by Roshni Maji
Gurugram
6 - 8 yrs
₹8L - ₹22L / yr
Automation
Docker
SQL
Amazon Web Services (AWS)
Azure
+4 more

Role: Automation Tester – Data Engineering

Experience: 6+ years

Work Mode: Hybrid (2–3 days onsite/week)

Locations: Gurgaon

Notice Period: Immediate Joiners Preferred


Mandatory Skills:

  • Hands-on automation testing experience in Data Engineering or Data Warehousing
  • Proficiency in Docker
  • Experience working on any Cloud platform (AWS, Azure, or GCP)
  • Experience in ETL Testing is a must
  • Automation testing using Pytest or Scalatest
  • Strong SQL skills and data validation techniques
  • Familiarity with data processing tools such as ETL, Hadoop, Spark, Hive
  • Sound knowledge of SDLC and Agile methodologies
  • Ability to write efficient, clean, and maintainable test scripts
  • Strong problem-solving, debugging, and communication skills


Good to Have:

  • Exposure to additional test frameworks like Selenium, TestNG, or JUnit


Key Responsibilities:

  • Develop, execute, and maintain automation scripts for data pipelines
  • Perform comprehensive data validation and quality assurance
  • Collaborate with data engineers, developers, and stakeholders
  • Troubleshoot issues and improve test reliability
  • Ensure consistent testing standards across development cycles
Enqubes

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Remote only
7 - 10 yrs
₹10L - ₹15L / yr
SAP BODS
SAP HANA
HANA
ETL management
ETL
+3 more

Job Title: SAP BODS Developer

  • Experience: 7–10 Years
  • Location: Remote (India-based candidates only)
  • Employment Type: Permanent (Full-Time)
  • Salary Range: ₹20 – ₹25 LPA (Fixed CTC)


Required Skills & Experience:

- 7–10 years of hands-on experience as a SAP BODS Developer.

- Strong experience in S/4HANA implementation or upgrade projects with large-scale data migration.

- Proficient in ETL development, job optimization, and performance tuning using SAP BODS.

- Solid understanding of SAP data structures (FI, MM, SD, etc.) from a technical perspective.

- Skilled in SQL scripting, error resolution, and job monitoring.

- Comfortable working independently in a remote, spec-driven development environment.


Tecblic Private Limited
Ahmedabad
4 - 5 yrs
₹8L - ₹12L / yr
Microsoft Windows Azure
SQL
Python
PySpark
ETL
+2 more

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀


Job description

🔍 Job Title: Data Engineer

📍 Location: Ahmedabad

🚀 Work Mode: On-Site Opportunity

📅 Experience: 4+ Years

🕒 Employment Type: Full-Time

⏱️ Availability : Immediate Joiner Preferred


Join Our Team as a Data Engineer

We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.

As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.


Your Key Responsibilities

Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.

Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.

Implement data validation, transformation, and quality monitoring processes.

Collaborate with cross-functional teams to deliver impactful, data-driven solutions.

Proactively identify bottlenecks and optimize existing workflows and processes.

Provide guidance and mentorship to junior engineers in the team.


Skills & Expertise We’re Looking For

3+ years of hands-on experience in Data Engineering or related roles.

Strong expertise in Python and data pipeline design.

Experience working with Big Data tools like Hadoop, Spark, Hive.

Proficiency with SQL, NoSQL databases, and data warehousing solutions.

Solid experience in cloud platforms - Azure

Familiar with distributed computing, data modeling, and performance tuning.

Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.

Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.


Qualifications

Bachelor’s degree in Computer Science, Data Science, or a related field.

Hypersonix Inc
Posted by Reshika Mendiratta
Remote only
7+ yrs
Up to ₹40L / yr (varies)
SQL
Python
ETL
Data engineering
Big Data
+2 more

About the Company

Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.


About the Role

We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.


Roles and Responsibilities

  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
  • Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
  • Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader


Requirements

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • We are looking for a candidate with 7+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science or Information Technology, or has completed an MCA.
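
Several requirements above touch message queuing and stream processing; below is a minimal, illustrative consumer loop using the kafka-python client. The topic name, broker address, and message shape are assumptions for the example, not the product's actual event schema.

# Minimal Kafka consumer sketch; topic, broker, and message fields are assumed.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                      # assumed topic name
    bootstrap_servers="localhost:9092",        # placeholder broker
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Maintain a running count of events per product as messages arrive.
counts: dict[str, int] = {}
for message in consumer:
    event = message.value
    product_id = event.get("product_id", "unknown")
    counts[product_id] = counts.get(product_id, 0) + 1
    print(f"{product_id}: {counts[product_id]} events so far")
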
Deqode
Posted by Apoorva Jain
Hyderabad
5 - 10 yrs
₹6L - ₹15L / yr
ETL
ELT
SQL
PL/SQL
Siebel CRM
+3 more

Position Summary:

As a CRM ETL Developer, you will be responsible for the analysis, transformation, and integration of data from legacy and external systems into the CRM application. This includes developing ETL/ELT workflows, ensuring data quality through cleansing and survivorship rules, and supporting daily production loads. You will work in an Agile environment and play a vital role in building scalable, high-quality data integration solutions.

Key Responsibilities:

  • Analyze data from legacy and external systems; develop ETL/ELT pipelines to ingest and process data.
  • Cleanse, transform, and apply survivorship rules before loading into the CRM platform.
  • Monitor, support, and troubleshoot production data loads (Tier 1 & Tier 2 support).
  • Contribute to solution design, development, integration, and scaling of new/existing systems.
  • Promote and implement best practices in data integration, performance tuning, and Agile development.
  • Lead or support design reviews, technical documentation, and mentoring of junior developers.
  • Collaborate with business analysts, QA, and cross-functional teams to resolve defects and clarify requirements.
  • Deliver working solutions via quick POCs or prototypes for business scenarios.

Technical Skills:

  • ETL/ELT Tools: 5+ years of hands-on experience in ETL processes using Siebel EIM.
  • Programming & Databases: Strong SQL & PL/SQL development; experience with Oracle and/or SQL Server.
  • Data Integration: Proven experience in integrating disparate data systems.
  • Data Modelling: Good understanding of relational, dimensional modelling, and data warehousing concepts.
  • Performance Tuning: Skilled in application and SQL query performance optimization.
  • CRM Systems: Familiarity with Siebel CRM, Siebel Data Model, and Oracle SOA Suite is a plus.
  • DevOps & Agile: Strong knowledge of DevOps pipelines and Agile methodologies.
  • Documentation: Ability to write clear technical design documents and test cases.

Soft Skills & Attributes:

  • Strong analytical and problem-solving skills.
  • Excellent communication and interpersonal abilities.
  • Experience working with cross-functional, globally distributed teams.
  • Proactive mindset and eagerness to learn new technologies.
  • Detail-oriented with a focus on reliability and accuracy.

Preferred Qualifications:

  • Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • Experience in Tier 1 & Tier 2 application support roles.
  • Exposure to real-time data integration systems is an advantage.


Read more
Metric Vibes

Metric Vibes

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Noida
4 - 8 yrs
₹10L - ₹15L / yr
PowerBI
JavaScript
RESTful APIs
Embedded software
SQL
+9 more

Job Title: Tableau BI Developer

Years of Experience: 4-8 years

$12 per hour FTE engagement

8 working hours per day


Required Skills & Experience:

✅ 4–8 years of experience in BI development and data engineering

✅ Expertise in BigQuery and/or Snowflake for large-scale data processing

✅ Strong SQL skills with experience writing complex analytical queries

✅ Experience in creating dashboards in tools like Power BI, Looker, or similar

✅ Hands-on experience with ETL/ELT tools and data pipeline orchestration

✅ Familiarity with cloud platforms (GCP, AWS, or Azure)

✅ Strong understanding of data modeling, data warehousing, and analytics best practices

✅ Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders

Read more
Pulsedata Labs Pvt Ltd

Pulsedata Labs Pvt Ltd

Agency job
Remote only
5 - 7 yrs
₹20L - ₹30L / yr
Databricks
Spark
PySpark
SQL Server
ETL
+2 more

Company name: PulseData labs Pvt Ltd (captive Unit for URUS, USA)


About URUS

We are the URUS family (US), a global leader in products and services for Agritech.


SENIOR DATA ENGINEER

This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks and strong skills in SQL Server, SSIS and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.


Responsibilities

Data Integration

  • Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
  • Troubleshoot and resolve Databricks pipeline errors and performance issues.
  • Maintain legacy SSIS packages for ETL processes.
  • Troubleshoot and resolve SSIS package errors and performance issues.
  • Optimize data flow performance and minimize data latency.
  • Implement data quality checks and validations within ETL processes.
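A minimal sketch of the kind of data-quality gate referred to above, assuming a Databricks/Spark environment with Delta tables; the paths and column names are illustrative:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_quality_check").getOrCreate()

orders = spark.read.format("delta").load("/mnt/bronze/orders")  # hypothetical bronze table

# Aggregate a few cheap validations in a single pass.
checks = orders.agg(
    F.count("*").alias("row_count"),
    F.sum(F.when(F.col("order_id").isNull(), 1).otherwise(0)).alias("null_order_ids"),
    F.sum(F.when(F.col("amount") < 0, 1).otherwise(0)).alias("negative_amounts"),
).collect()[0]

if checks["null_order_ids"] > 0 or checks["negative_amounts"] > 0:
    raise ValueError(f"Data quality check failed: {checks.asDict()}")

# Only clean, de-duplicated data moves on to the silver layer.
(orders.dropDuplicates(["order_id"])
       .write.format("delta").mode("overwrite").save("/mnt/silver/orders"))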

Databricks Development

  • Develop and maintain Databricks pipelines and datasets using Python, Spark and SQL.
  • Migrate legacy SSIS packages to Databricks pipelines.
  • Optimize Databricks jobs for performance and cost-effectiveness.
  • Integrate Databricks with other data sources and systems.
  • Participate in the design and implementation of data lake architectures.

Data Warehousing

  • Participate in the design and implementation of data warehousing solutions.
  • Support data quality initiatives and implement data cleansing procedures.

Reporting and Analytics

  • Collaborate with business users to understand data requirements for department-driven reporting needs.
  • Maintain existing library of complex SSRS reports, dashboards, and visualizations.
  • Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.

Collaboration and Communication

  • Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
  • Collaborate effectively with business users, data analysts, and other IT teams.
  • Communicate technical information clearly and concisely, both verbally and in writing.
  • Document all development work and procedures thoroughly.

Continuous Growth

  • Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
  • Continuously improve skills and knowledge through training and self-learning.

This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.


Requirements

  • Bachelor's degree in computer science, Information Systems, or a related field.
  • 7+ years of experience in data integration and reporting.
  • Extensive experience with Databricks, including Python, Spark, and Delta Lake.
  • Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
  • Experience with SSIS (SQL Server Integration Services) development and maintenance.
  • Experience with SSRS (SQL Server Reporting Services) report design and development.
  • Experience with data warehousing concepts and best practices.
  • Experience with Microsoft Azure cloud platform and Microsoft Fabric desirable.
  • Strong analytical and problem-solving skills.
  • Excellent communication and interpersonal skills.
  • Ability to work independently and as part of a team.
  • Experience with Agile methodologies.



Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Hyderabad
6 - 10 yrs
₹10L - ₹20L / yr
IBM Cognos BI
IBM Cognos Framework Manager
Cognos Dashboarding
SQL
Data modeling
+8 more

Job Title : Cognos BI Developer

Experience : 6+ Years

Location : Bangalore / Hyderabad (Hybrid)

Notice Period : Immediate Joiners Preferred (Candidates serving notice with 10–15 days left can be considered)

Interview Mode : Virtual


Job Description :

We are seeking an experienced Cognos BI Developer with strong data modeling, dashboarding, and reporting expertise to join our growing team. The ideal candidate should have a solid background in business intelligence, data visualization, and performance analysis, and be comfortable working in a hybrid setup from Bangalore or Hyderabad.


Mandatory Skills :

Cognos BI, Framework Manager, Cognos Dashboarding, SQL, Data Modeling, Report Development (charts, lists, cross tabs, maps), ETL Concepts, KPIs, Drill-through, Macros, Prompts, Filters, Calculations.



Key Responsibilities :

  1. Understand business requirements in the BI context and design data models using Framework Manager to transform raw data into meaningful insights.
  2. Develop interactive dashboards and reports using Cognos Dashboard.
  3. Identify and define KPIs and create reports to monitor them effectively.
  4. Analyze data and present actionable insights to support business decision-making.
  5. Translate business requirements into technical specifications and determine timelines for execution.
  6. Design and develop models in Framework Manager, publish packages, manage security, and create reports based on these packages.
  7. Develop various types of reports, including charts, lists, cross tabs, and maps, and design dashboards combining multiple reports.
  8. Implement reports using macros, prompts, filters, and calculations.
  9. Perform data warehouse development activities and ensure seamless data flow.
  10. Write and optimize SQL queries to investigate data and resolve performance issues.
  11. Utilize Cognos features such as master-detail reports, drill-throughs, bookmarks, and page sets.
  12. Analyze and improve ETL processes to enhance data integration.
  13. Apply technical enhancements to existing BI systems to improve their performance and usability.
  14. Possess solid understanding of database fundamentals, including relational and multidimensional database design.
  15. Hands-on experience with Cognos Data Modules (data modeling) and dashboarding.
Read more
A leader in telecom, fintech, AI-led marketing automation.

A leader in telecom, fintech, AI-led marketing automation.

Agency job
via Infinium Associate by Toshi Srivastava
Bengaluru (Bangalore)
9 - 15 yrs
₹25L - ₹35L / yr
MERN Stack
Python
MongoDB
Spark
Hadoop
+7 more

We are looking for a talented MERN Developer with expertise in MongoDB/MySQL, Kubernetes, Python, ETL, Hadoop, and Spark. The ideal candidate will design, develop, and optimize scalable applications while ensuring efficient source code management and implementing Non-Functional Requirements (NFRs).


Key Responsibilities:

  • Develop and maintain robust applications using MERN Stack (MongoDB, Express.js, React.js, Node.js).
  • Design efficient database architectures (MongoDB/MySQL) for scalable data handling.
  • Implement and manage Kubernetes-based deployment strategies for containerized applications.
  • Ensure compliance with Non-Functional Requirements (NFRs), including source code management, development tools, and security best practices.
  • Develop and integrate Python-based functionalities for data processing and automation.
  • Work with ETL pipelines for smooth data transformations.
  • Leverage Hadoop and Spark for processing and optimizing large-scale data operations.
  • Collaborate with solution architects, DevOps teams, and data engineers to enhance system performance.
  • Conduct code reviews, troubleshooting, and performance optimization to ensure seamless application functionality.
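For illustration only, a small Python batch step of the sort implied by the MongoDB/MySQL and ETL requirements; the connection strings, collection, and fields are placeholders:

import pandas as pd
from pymongo import MongoClient
from sqlalchemy import create_engine

# Extract paid orders from MongoDB (hypothetical database/collection).
mongo = MongoClient("mongodb://localhost:27017")
docs = mongo["shop"]["orders"].find({"status": "PAID"}, {"_id": 0, "order_id": 1, "user": 1, "total": 1})

# Flatten nested documents into rows.
rows = [
    {"order_id": d["order_id"], "email": d.get("user", {}).get("email"), "total": d.get("total", 0)}
    for d in docs
]

# Load into a MySQL reporting table (hypothetical DSN; requires PyMySQL).
engine = create_engine("mysql+pymysql://etl_user:secret@localhost:3306/reporting")
pd.DataFrame(rows).to_sql("paid_orders", engine, if_exists="append", index=False)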


Required Skills & Qualifications:

  • Proficiency in MERN Stack (MongoDB, Express.js, React.js, Node.js).
  • Strong understanding of database technologies (MongoDB/MySQL).
  • Experience working with Kubernetes for container orchestration.
  • Hands-on knowledge of Non-Functional Requirements (NFRs) in application development.
  • Expertise in Python, ETL pipelines, and big data technologies (Hadoop, Spark).
  • Strong problem-solving and debugging skills.
  • Knowledge of microservices architecture and cloud computing frameworks.

Preferred Qualifications:

  • Certifications in cloud computing, Kubernetes, or database management.
  • Experience in DevOps, CI/CD automation, and infrastructure management.
  • Understanding of security best practices in application development.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
VenkataRamanan S
Posted by VenkataRamanan S
Mumbai
4 - 8 yrs
₹15L - ₹25L / yr
Python
SQL
ETL

What We’re Looking For:

  • Strong experience in Python (3+ years).
  • Hands-on experience with any database (SQL or NoSQL).
  • Experience with frameworks like Flask, FastAPI, or Django.
  • Knowledge of ORMs, API development, and unit testing.
  • Familiarity with Git and Agile methodologies.
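As a flavour of the expected skill set, a minimal FastAPI endpoint with an in-process unit test; the route and model are hypothetical:

from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Trade(BaseModel):
    symbol: str
    quantity: int
    price: float

@app.post("/trades")
def create_trade(trade: Trade) -> dict:
    # A real service would persist this via an ORM (e.g., SQLAlchemy).
    return {"status": "accepted", "notional": trade.quantity * trade.price}

def test_create_trade():
    client = TestClient(app)
    resp = client.post("/trades", json={"symbol": "INFY", "quantity": 10, "price": 1500.5})
    assert resp.status_code == 200
    assert resp.json()["notional"] == 15005.0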


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Sruthy VS
Posted by Sruthy VS
Bengaluru (Bangalore), Mumbai
4 - 8 yrs
Best in industry
Snow flake schema
ETL
SQL
Python
  • Strong Snowflake cloud database experience as a database developer.
  • Knowledge of Spark and Databricks is desirable.
  • Strong technical background in data modelling, database design, and optimization for data warehouses, specifically on column-oriented MPP architectures.
  • Familiar with technologies relevant to data lakes, such as Snowflake.
  • Strong ETL and database design/modelling skills.
  • Experience creating data pipelines.
  • Strong SQL and debugging skills, with performance tuning experience.
  • Experience with Databricks/Azure is good to have.
  • Experience working with global teams and global application environments.
  • Strong understanding of SDLC methodologies, with a track record of high-quality deliverables and data quality, including detailed technical design documentation.
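A hedged sketch of a typical Snowflake load-and-verify step driven from Python; the account, stage, and table names are placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1", user="ETL_USER", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()
try:
    # Bulk-load staged Parquet files into a staging table.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Quick sanity check after the load.
    cur.execute("SELECT COUNT(*), MAX(LOAD_TS) FROM STAGING.ORDERS")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()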

 

 

 

Read more
Indigrators solutions
Hyderabad
5 - 9 yrs
₹10L - ₹25L / yr
SAP PP/PI
SAP PI
APIs
ETL
ELT
+2 more

The role reports to the Head of Customer Support, and the position holder is part of the Product Team.

 

Main objectives of the role

·    Focus on customer satisfaction with the product and provide first-line support.

Specialisation

·    Customer Support
·    SaaS
·    FMCG/CPG

Key processes in the role

·    Build extensive knowledge of our SaaS product platform and support our customers in using it.
·    Support end customers with complex questions.
·    Provide extended and elaborated answers to business and “how to” questions from customers.
·    Participate in ongoing education for Customer Support Managers.
·    Collaborate and communicate with the Development teams, Product Support, and customers.

Requirements

·    Bachelor’s degree in Business, IT, Engineering, or Economics.
·    4-8 years of experience in a similar role in the IT industry.
·    Solid knowledge of SaaS (Software as a Service).
·    Multitasking is your second nature, and you have a proactive, customer-first mindset.
·    3+ years of experience providing support for ERP systems, preferably SAP.
·    Familiarity with ERP/SAP integration processes and data migration.
·    Understanding of ERP/SAP functionalities, modules, and data structures.
·    Understanding of technicalities like integrations (APIs, ETL, ELT), analysing logs, identifying errors in logs, etc.
·    Experience in looking into code, changing configuration, and analysing whether an issue is a development bug or a product bug.
·    Profound understanding of support processes.
·    Should know where to route tickets further and how to manage customer escalations.
·    Outstanding customer service skills.
·    Knowledge of the Fast-Moving Consumer Goods (FMCG)/Consumer Packaged Goods (CPG) industry/domain is preferable.
·    Excellent verbal and written communication skills in English.

Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Noida
5 - 9 yrs
₹40L - ₹60L / yr
Python
SQL
Data engineering
Snowflake
ETL
+5 more

About the Role:

We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Able to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.

 

 

 

 

Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.


Read more
NeoGenCode Technologies Pvt Ltd
Pune
8 - 15 yrs
₹5L - ₹24L / yr
Data engineering
Snow flake schema
SQL
ETL
ELT
+5 more

Job Title : Data Engineer – Snowflake Expert

Location : Pune (Onsite)

Experience : 10+ Years

Employment Type : Contractual

Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.


Job Summary :

We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.

The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.

Responsibilities :

  • Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
  • Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
  • Ensure high data quality, security, and adherence to governance frameworks.
  • Conduct code reviews and align development with best practices.
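A minimal sketch of the Snowpipe/Streams/Tasks pattern named in the mandatory skills, issued through the Snowflake Python connector; all object names are illustrative:

import snowflake.connector

conn = snowflake.connector.connect(account="xy12345", user="DE_USER", password="***",
                                    warehouse="XFORM_WH", database="ANALYTICS", schema="RAW")
cur = conn.cursor()

# Capture changes on the raw table...
cur.execute("CREATE STREAM IF NOT EXISTS RAW.ORDERS_STREAM ON TABLE RAW.ORDERS")

# ...and merge them into the curated layer on a schedule.
cur.execute("""
    CREATE TASK IF NOT EXISTS CURATED.MERGE_ORDERS
      WAREHOUSE = XFORM_WH
      SCHEDULE  = '5 MINUTE'
    AS
      MERGE INTO CURATED.ORDERS t
      USING RAW.ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
      WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS, t.UPDATED_AT = s.UPDATED_AT
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, UPDATED_AT)
                            VALUES (s.ORDER_ID, s.STATUS, s.UPDATED_AT)
""")
cur.execute("ALTER TASK CURATED.MERGE_ORDERS RESUME")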

Qualifications :

  • Bachelor’s in Computer Science, Data Science, IT, or related field.
  • Snowflake certifications (Pro/Architect) preferred.
Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Noida
5 - 8 yrs
₹25L - ₹40L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
ETL
+6 more

About the Role:

We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Able to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.

 

 Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.


Read more
Hyderabad
5 - 8 yrs
₹24L - ₹30L / yr
Apache Kafka
Elastic Search
NodeJS (Node.js)
ETL
Python
+2 more

Company Overview

We are a dynamic startup dedicated to empowering small businesses through innovative technology solutions. Our mission is to level the playing field for small businesses by providing them with powerful tools to compete effectively in the digital marketplace. Join us as we revolutionize the way small businesses operate online, bringing innovation and growth to local communities.


Job Description

We are seeking a skilled and experienced Data Engineer to join our team. In this role, you will develop systems on cloud platforms capable of processing millions of interactions daily, leveraging the latest cloud computing and machine learning technologies while creating custom in-house data solutions. The ideal candidate should have hands-on experience with SQL, PL/SQL, and any standard ETL tools. You must be able to thrive in a fast-paced environment and possess a strong passion for coding and problem-solving.


Required Skills and Experience

  • Minimum 5 years of experience in software development.
  • 3+ years of experience in data management and SQL expertise – PL/SQL, Teradata, and Snowflake experience strongly preferred.
  • Expertise in big data technologies such as Hadoop, HiveQL, and Spark (Scala/Python).
  • Expertise in cloud technologies – AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR).
  • Experience with queuing systems (e.g., SQS, Kafka) and caching systems (e.g., Ehcache, Memcached).
  • Experience with container management tools (e.g., Docker Swarm, Kubernetes).
  • Familiarity with data stores, including at least one of the following: Postgres, MongoDB, Cassandra, or Redis.
  • Ability to create advanced visualizations and dashboards to communicate complex findings (e.g., Looker Studio, Power BI, Tableau).
  • Strong skills in manipulating and transforming complex datasets for in-depth analysis.
  • Technical proficiency in writing code in Python and advanced SQL queries.
  • Knowledge of AI/ML infrastructure, best practices, and tools is a plus.
  • Experience in analyzing and resolving code issues.
  • Hands-on experience with software architecture concepts such as Separation of Concerns (SoC) and micro frontends with theme packages.
  • Proficiency with the Git version control system.
  • Experience with Agile development methodologies.
  • Strong problem-solving skills and the ability to learn quickly.
  • Exposure to Docker and Kubernetes.
  • Familiarity with AWS or other cloud platforms.


Responsibilities

  • Develop and maintain our in-house search and reporting platform
  • Create data solutions to complement core products to improve performance and data quality
  • Collaborate with the development team to design, develop, and maintain our suite of products.
  • Write clean, efficient, and maintainable code, adhering to coding standards and best practices.
  • Participate in code reviews and testing to ensure high-quality code.
  • Troubleshoot and debug application issues as needed.
  • Stay up-to-date with emerging trends and technologies in the development community.
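As a rough sketch of the streaming side of such a platform, a Kafka consumer (kafka-python) that shapes events for a search or reporting store; the broker, topic, and field names are placeholders:

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "product-events",
    bootstrap_servers=["localhost:9092"],
    group_id="search-indexer",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    doc = {"sku": event["sku"], "price": event["price"], "in_stock": event.get("qty", 0) > 0}
    # In the real pipeline this document would be bulk-indexed into the search store
    # or staged to S3 for batch loading; printing stands in for that step here.
    print(doc)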


How to apply?

  • If you are passionate about designing user-centric products and want to be part of a forward-thinking company, we would love to hear from you. Please send your resume, a brief cover letter outlining your experience and your current CTC (Cost to Company) as a part of the application.


Join us in shaping the future of e-commerce!

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Monika Sekaran
Posted by Monika Sekaran
Bengaluru (Bangalore)
7 - 12 yrs
Best in industry
Python
ETL
SQL
Databases
Informatica
+2 more
  • A bachelor’s degree in Computer Science or a related field.
  • 5-7 years of experience working as a hands-on developer in Sybase, DB2, and ETL technologies.
  • Worked extensively on data integration, designing and developing reusable interfaces.
  • Advanced experience in Python, DB2, Sybase, shell scripting, Unix, Perl scripting, DB platforms, database design, and modeling.
  • Expert-level understanding of data warehouse and core database concepts and relational database design.
  • Experience in writing stored procedures, optimization, and performance tuning.
  • Strong technology acumen and a deep strategic mindset.
  • Proven track record of delivering results.
  • Proven analytical skills and experience making decisions based on hard and soft data.
  • A desire and openness to learning and continuous improvement, both of yourself and your team members.
  • Hands-on experience in the development of APIs is a plus.
  • Good to have: experience with Business Intelligence tools and Source-to-Pay applications such as SAP Ariba and Accounts Payable systems.
  • Familiarity with Postgres and Python is a plus.


Read more
Deqode

at Deqode

1 recruiter
Mokshada Solanki
Posted by Mokshada Solanki
Bengaluru (Bangalore), Mumbai, Pune, Gurugram
4 - 5 yrs
₹4L - ₹20L / yr
SQL
Amazon Web Services (AWS)
Migration
PySpark
ETL

Job Summary:

Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.


Key Responsibilities:

  • Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
  • Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
  • Work on data migration tasks in AWS environments.
  • Monitor and improve database performance; automate key performance indicators and reports.
  • Collaborate with cross-functional teams to support data integration and delivery requirements.
  • Write shell scripts for automation and manage ETL jobs efficiently.
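A hedged skeleton of the AWS Glue + PySpark style of job described above; the catalog database, table, and S3 path are placeholders:

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, filter, and derive a partition column.
dyf = glue_context.create_dynamic_frame.from_catalog(database="sales_db", table_name="raw_orders")
df = (dyf.toDF()
         .filter(F.col("order_status") == "COMPLETE")
         .withColumn("order_date", F.to_date("created_at")))

# Write curated, partitioned Parquet back to S3.
(df.write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://analytics-curated/orders/"))

job.commit()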


Required Skills:

  • Strong experience with MySQL, complex SQL queries, and stored procedures.
  • Hands-on experience with AWS Glue, PySpark, and ETL processes.
  • Good understanding of AWS ecosystem and migration strategies.
  • Proficiency in shell scripting.
  • Strong communication and collaboration skills.


Nice to Have:

  • Working knowledge of Python.
  • Experience with AWS RDS.



Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune, Mumbai, Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹5L - ₹10L / yr
ETL
SQL
Amazon Web Services (AWS)
PySpark
KPI

Role - ETL Developer

Work Mode - Hybrid

Experience- 4+ years

Location - Pune, Gurgaon, Bengaluru, Mumbai

Required Skills - AWS, AWS Glue, PySpark, ETL, SQL

Required Skills:

  • 4+ years of hands-on experience in MySQL, including SQL query and procedure development
  • Experience in PySpark, AWS, and AWS Glue
  • Experience with AWS migration
  • Experience with automated scripting and tracking KPIs/metrics for database performance
  • Proficiency in shell scripting and ETL
  • Strong communication skills and a collaborative team player
  • Knowledge of Python and AWS RDS is a plus


Read more
Mega Style Apartments

at Mega Style Apartments

2 candid answers
Mega Style Apartments Hiring Team
Posted by Mega Style Apartments Hiring Team
Remote only
0 - 20 yrs
₹2L - ₹4L / yr
Next.js
Frontend
User Experience (UX) Design
React.js
Database Design
+13 more

TL;DR

Founding Software Engineer (Next.js / React / TypeScript) — ₹17,000–₹24,000 net ₹/mo — 100% remote (India) — ~40 h/wk — green-field stack, total autonomy, ship every week. If you can own the full lifecycle and prove impact every Friday, apply.



🏢 Mega Style Apartments

We rent beautifully furnished 1- to 4-bedroom flats that feel like home but run like a hotel—so travellers can land, unlock the door, and live like locals from hour one. Tech is now the growth engine, and you’ll be employee #1 in engineering, laying the cornerstone for a tech platform that will redefine the premium furnished apartment experience.



✨ Why This Role Rocks


💡 Green-field Everything

Choose the stack, CI, even the linter.


🎯 Visible Impact & Ambition

Every deploy reaches real guests this week. Lay rails for ML that can boost revenue 20%.


⏱️ Radical Autonomy

Plan sprints, own deploys; no committees.

  • Direct line to decision-makers → zero red tape
  • Modern DX: Next.js & React (latest stable), Tailwind, Prisma/Drizzle, Vercel, optional AI copilots – building mostly server-rendered, edge-ready flows.
  • Async-first, with structured weekly 1-on-1s to ensure you’re supported, not micromanaged.
  • Unmatched Career Acceleration: Build an entire tech foundation from zero, making decisions that will define your trajectory and our company's success.


🗓️ Your Daily Rhythm

  • Morning: Check metrics, pick highest-impact task
  • Day: Build → ship → measure
  • Evening: 10-line WhatsApp update (done, next, blockers)
  • Friday: Live demo of working software (no mock-ups)


📈 Success Milestones

  • Week 1: First feature in production
  • Month 1: Automation that saves ≥10 h/week for ops
  • Month 3: Core platform stable; conversion up, load times down (aiming for <1s LCP); ready for future ML pricing (stretch goal: +20% revenue within 12 months).


🔑 What You’ll Own

  1. Ship guest-facing features with Next.js (App Router / RSC / Server Actions).
  2. Automate ops—dashboards & LLM helpers that delete busy-work.
  3. Full lifecycle: idea → spec → code → deploy → measure → iterate.
  4. Set up CI/CD & observability on Vercel; a dedicated half-day refactor slot each sprint keeps tech-debt low.
  5. Optimise for outcomes—conversion, CWV, security, reliability; laying the groundwork for future capabilities in dynamic pricing and guest personalization.

Prototype > promise. Results > hours-in-chair.



💻 Must-Have Skills


Frontend Focus:

  • Next.js (App Router/RSC/Server Actions)
  • React (latest stable), TypeScript
  • Tailwind CSS + shadcn/ui
  • State mgmt (TanStack Query / Zustand / Jotai)

Backend & DevOps Focus:

  • Node.js APIs, Prisma/Drizzle ORM
  • Solid SQL schema design (e.g., PostgreSQL)
  • Auth.js / Better-Auth, web security best practices
  • GitHub Flow, automated tests, CI, Vercel deploys
  • Excellent English; explain trade-offs to non-tech peers
  • Self-starter—comfortable as the engineer (for now)


🌱 Nice-to-Haves (Learn Here or Teach Us)

A/B testing & CRO, Python/basic ML, ETL pipelines, Advanced SEO & CWV, Payment APIs (Stripe, Merchant Warrior), n8n automation


🎁 Perks & Benefits

  • 100% remote anywhere in 🇮🇳
  • Flexible hours (~40 h/wk)
  • 12 paid days off (holiday + sick)
  • ₹1,700/mo health insurance reimbursement (post-probation)
  • Performance bonuses for measurable wins
  • 6-month paid probation → permanent role & full benefits (this is a full-time employment role)
  • Blank-canvas stack—your decisions live on
  • Equity is not offered at this time; we compensate via performance bonuses and a clear path for growth, with future leadership opportunities as the company and engineering team scales.


⏩ Hiring Process (7–10 Days, Fast & Fair)

All stages are async & remote.

  1. Apply: 5-min form + short quiz (approx. 15 min total)
  2. Test 1: TypeScript & logic (1 h)
  3. Test 2: Next.js / React / Node / SQL deep-dive (1 h)
  4. Final: AI Video interview (1 h)


🚫 Who Shouldn’t Apply

  • Need daily hand-holding
  • Prefer consensus to decisions
  • Chase perfect code over shipped value
  • “Move fast & learn” culture feels scary



🚀 Ready to Own the Stack?

If you read this and thought “Finally—no bureaucracy,” and you're ready to set the technical standard for a growing company, show us something you’ve built and apply here →

Read more
TechMynd Consulting

at TechMynd Consulting

2 candid answers
Suraj N
Posted by Suraj N
Bengaluru (Bangalore), Gurugram, Mumbai
4 - 8 yrs
₹10L - ₹24L / yr
Data Science
PostgreSQL
Python
Apache
Amazon Web Services (AWS)
+5 more

Senior Data Engineer


Location: Bangalore, Gurugram (Hybrid)


Experience: 4-8 Years


Type: Full Time | Permanent


Job Summary:


We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.


This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.


Key Responsibilities:


PostgreSQL & Data Modeling

· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis
· Contribute to schema design and data normalization

Data Migration & Transformation

· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data

Python Scripting for Data Engineering

· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, and cloud SDKs (e.g., Boto3)
· Maintain reusable script modules for operational pipelines

Data Orchestration with Apache Airflow

· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling
· Integrate Airflow with cloud services, data lakes, and data warehouses
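A minimal sketch of the DAG pattern described in this section (retries, dependencies, failure notifications); the task bodies, schedule, and alert address are placeholders:

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull incremental rows from PostgreSQL")

def load(**context):
    print("write cleaned rows to the data mart")

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,
    "email": ["data-alerts@example.com"],
}

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task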


Cloud Platforms (AWS / Azure / GCP)

· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations

Data Marts & Analytics Layer

· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning

Modern Data Stack Integration

· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack

BI & Reporting Tools (Power BI / Superset / Supertech)

· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools


Required Skills & Qualifications:

· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore)
8 - 12 yrs
₹12L - ₹25L / yr
ETL
SQL
Snow flake schema
  • 8-10 years of experience in ETL testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts; experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience with JIRA and the Xray defect management tool is good to have.
  • Exposure to financial domain knowledge is considered a plus.
  • Testing data readiness (data quality) and addressing code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Demonstrated strong collaboration across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution.
  • Prior experience with State Street and Charles River Development (CRD) is considered a plus.
  • Experience with tools such as PowerPoint, Excel, and SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus.
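For illustration, a tiny source-to-target reconciliation check of the kind used in ETL/DWH testing; the in-memory SQLite databases below stand in for the real source system and Snowflake target:

import sqlite3

def reconcile(source_conn, target_conn, source_sql, target_sql):
    # Compare row counts and a simple amount checksum between source and target.
    src_count, src_sum = source_conn.execute(source_sql).fetchone()
    tgt_count, tgt_sum = target_conn.execute(target_sql).fetchone()
    assert src_count == tgt_count, f"row count mismatch: {src_count} vs {tgt_count}"
    assert src_sum == tgt_sum, f"amount checksum mismatch: {src_sum} vs {tgt_sum}"
    return src_count

src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
src.executescript("CREATE TABLE trades(id INT, amount REAL); INSERT INTO trades VALUES (1, 10.5), (2, 4.5);")
tgt.executescript("CREATE TABLE fact_trades(id INT, amount REAL); INSERT INTO fact_trades VALUES (1, 10.5), (2, 4.5);")

rows = reconcile(src, tgt,
                 "SELECT COUNT(*), SUM(amount) FROM trades",
                 "SELECT COUNT(*), SUM(amount) FROM fact_trades")
print(f"reconciled {rows} rows")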


Read more
Quanteon Solutions
DurgaPrasad Sannamuri
Posted by DurgaPrasad Sannamuri
Hyderabad
3 - 5 yrs
₹8L - ₹20L / yr
SQL
PostgreSQL
Databases
ETL
Data Visualization
+8 more

We’re looking for an experienced SQL Developer with 3+ years of hands-on experience to join our growing team. In this role, you’ll be responsible for designing, developing, and maintaining SQL queries, procedures, and data systems that support our business operations and decision-making processes. You should be passionate about data, highly analytical, and capable of working both independently and collaboratively with cross-functional teams.


Key Responsibilities:


Design, develop, and maintain complex SQL queries, stored procedures, functions, and views.

Optimize existing queries for performance and efficiency.

Collaborate with data analysts, developers, and stakeholders to understand requirements and translate them into robust SQL solutions.

Design and implement ETL processes to move and transform data between systems.

Perform data validation, troubleshooting, and quality checks.

Maintain and improve existing databases, ensuring data integrity, security, and accessibility.

Document code, processes, and data models to support scalability and maintainability.

Monitor database performance and provide recommendations for improvement.

Work with BI tools and support dashboard/report development as needed.


Requirements:

3+ years of proven experience as an SQL Developer or in a similar role.

Strong knowledge of SQL and relational database systems (e.g., MS SQL Server, PostgreSQL, MySQL, Oracle).

Experience with performance tuning and optimization.

Proficiency in writing complex queries and working with large datasets.

Experience with ETL tools and data pipeline creation.

Familiarity with data warehousing concepts and BI reporting.

Solid understanding of database security, backup, and recovery.

Excellent problem-solving skills and attention to detail.

Good communication skills and ability to work in a team environment.


Nice to Have:


Experience with cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).

Knowledge of Python, Power BI, or other scripting/analytics tools.

Experience working in Agile or Scrum environments.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹17L / yr
Python
SQL
ETL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Job Summary:

We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.

Key Responsibilities:

  • Assist in the design, development, and maintenance of scalable and efficient data pipelines.
  • Write clean, maintainable, and performance-optimized SQL queries.
  • Develop data transformation scripts and automation using Python.
  • Support data ingestion processes from various internal and external sources.
  • Monitor data pipeline performance and help troubleshoot issues.
  • Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
  • Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
  • Document technical processes and pipeline architecture.

Core Skills Required:

  • Proficiency in SQL (data querying, joins, aggregations, performance tuning).
  • Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
  • Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
  • Understanding of relational databases and data warehouse concepts.
  • Familiarity with version control systems like Git.

Preferred Qualifications:

  • Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
  • Familiarity with data modeling and data integration concepts.
  • Basic knowledge of CI/CD practices for data pipelines.
  • Bachelor’s degree in Computer Science, Engineering, or related field.


Read more
Zafin
Agency job
via hirezyai by HR Hirezyai
Thiruvananthapuram
10 - 12 yrs
₹18L - ₹20L / yr
SQL
DAX
ADF
ETL

Founded in 2002, Zafin offers a SaaS product and pricing platform that simplifies core modernization for top banks worldwide. Our platform enables business users to work collaboratively to design and manage pricing, products, and packages, while technologists streamline core banking systems. 

With Zafin, banks accelerate time to market for new products and offers while lowering the cost of change and achieving tangible business and risk outcomes. The Zafin platform increases business agility while enabling personalized pricing and dynamic responses to evolving customer and market needs. 

Zafin is headquartered in Vancouver, Canada, with offices and customers around the globe including ING, CIBC, HSBC, Wells Fargo, PNC, and ANZ. Zafin is proud to be recognized as a top employer and certified Great Place to Work® in Canada, India and the UK. 

 

Job Summary: 

We are looking for a highly skilled and detail-oriented Data & Visualisation Specialist to join our team. The ideal candidate will have a strong background in Business Intelligence (BI), data analysis, and visualisation, with advanced technical expertise in Azure Data Factory (ADF), SQL, Azure Analysis Services, and Power BI. In this role, you will be responsible for performing ETL operations, designing interactive dashboards, and delivering actionable insights to support strategic decision-making. 


Key Responsibilities: 

· Azure Data Factory: Design, build, and manage ETL pipelines in Azure Data Factory to facilitate seamless data integration across systems. 

· SQL & Data Management: Develop and optimize SQL queries for extracting, transforming, and loading data while ensuring data quality and accuracy. 

· Data Transformation & Modelling: Build and maintain data models using Azure Analysis Services (AAS), optimizing for performance and usability. 

· Power BI Development: Create, maintain, and enhance complex Power BI reports and dashboards tailored to business requirements. 

· DAX Expertise: Write and optimize advanced DAX queries and calculations to deliver dynamic and insightful reports. 

· Collaboration: Work closely with stakeholders to gather requirements, deliver insights, and help drive data-informed decision-making across the organization. 

· Attention to Detail: Ensure data consistency and accuracy through rigorous validation and testing processes. 

· Presentation & Reporting: Effectively communicate insights and updates to stakeholders, delivering clear and concise documentation. 

 

Skills and Qualifications: 


Technical Expertise: 

· Proficient in Azure Data Factory for building ETL pipelines and managing data flows. 

· Strong experience with SQL, including query optimization and data transformation. 

· Knowledge of Azure Analysis Services for data modelling 

· Advanced Power BI skills, including DAX, report development, and data modelling. 

· Familiarity with Microsoft Fabric and Azure Analytics (a plus) 

· Analytical Thinking: Ability to work with complex datasets, identify trends, and tackle ambiguous challenges effectively 


Communication Skills: 

· Excellent verbal and written communication skills, with the ability to convey complex technical information to non-technical stakeholders. 

· Educational Qualification: Minimum of a Bachelor's degree, preferably in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or a related discipline 


What’s in it for you 

Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement.


Read more
Agivant Technologies

Agivant Technologies

Agency job
via Vidpro Consultancy Services by ashik thahir
Remote only
5 - 10 yrs
₹18L - ₹25L / yr
Python
SQL
Airflow
Snowflake
Elastic Search
+3 more

Experience: 5-8 Years

Work Mode: Remote

Job Type: Fulltime

Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elastic Search, and AWS.


Role Overview:

We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.


Responsibilities:

  • Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
  • Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
  • Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
  • Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
  • Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
  • Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
  • Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
  • Contribute to the development and enhancement of our data warehouse architecture
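A hedged sketch of the Elastic Search leg of such a pipeline, bulk-indexing transformed records with the official Python client; the host, index, and document shape are placeholders:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

records = [
    {"order_id": 101, "customer": "acme", "status": "SHIPPED"},
    {"order_id": 102, "customer": "globex", "status": "PENDING"},
]

# Bulk-index with deterministic IDs so reruns are idempotent.
actions = (
    {"_index": "orders-v1", "_id": rec["order_id"], "_source": rec}
    for rec in records
)
ok, errors = helpers.bulk(es, actions, raise_on_error=False)
print(f"indexed {ok} documents, {len(errors)} errors")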

Required Skills:

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
  • At least 3+ years of exp in Snowflake data warehousing technologies.
  • At least 3+ years of exp in creating and maintaining Airflow ETL pipelines.
  • Minimum 3+ years of professional level experience with Python languages for data manipulation and automation.
  • Working experience with Elastic Search and its application in data pipelines.
  • Proficiency in SQL and experience with data modelling techniques.
  • Strong understanding of cloud-based data storage solutions such as AWS S3.
  • Experience working with NFS and other file storage systems.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.


Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹25L / yr
ETL
SQL
Apache Spark
Apache Kafka

Role & Responsibilities

About the Role:


We are seeking a highly skilled Senior Data Engineer with 5-7 years of experience to join our dynamic team. The ideal candidate will have a strong background in data engineering, with expertise in data warehouse architecture, data modeling, ETL processes, and building both batch and streaming pipelines. The candidate should also possess advanced proficiency in Spark, Databricks, Kafka, Python, SQL, and Change Data Capture (CDC) methodologies.

Key responsibilities:


Design, develop, and maintain robust data warehouse solutions to support the organization's analytical and reporting needs.

Implement efficient data modeling techniques to optimize performance and scalability of data systems.

Build and manage data lakehouse infrastructure, ensuring reliability, availability, and security of data assets.

Develop and maintain ETL pipelines to ingest, transform, and load data from various sources into the data warehouse and data lakehouse.

Utilize Spark and Databricks to process large-scale datasets efficiently and in real-time.

Implement Kafka for building real-time streaming pipelines and ensure data consistency and reliability.

Design and develop batch pipelines for scheduled data processing tasks.

Collaborate with cross-functional teams to gather requirements, understand data needs, and deliver effective data solutions.

Perform data analysis and troubleshooting to identify and resolve data quality issues and performance bottlenecks.

Stay updated with the latest technologies and industry trends in data engineering and contribute to continuous improvement initiatives.

Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Mumbai
1 - 2 yrs
₹6L - ₹8L / yr
ETL
SQL
NOSQL Databases
RESTful APIs
Troubleshooting
+8 more

Profile: Product Support Engineer

🔴 Experience: 1 year as Product Support Engineer.

🔴 Location: Mumbai (Andheri).

🔴 5 days of working from office.


Skills Required:

🔷 Experience in providing support for ETL or data warehousing is preferred.

🔷 Good understanding of Unix and database concepts.

🔷 Experience working with SQL and NoSQL databases and writing simple queries to get data for debugging issues.

🔷 Being able to creatively come up with solutions for various problems and implement them.

🔷 Experience working with REST APIs and debugging requests and responses using tools like Postman.

🔷 Quick troubleshooting and diagnosing skills.

🔷 Knowledge of customer success processes.

🔷 Experience in document creation.

🔷 High availability for fast response to customers.

🔷 Language knowledge required in one of Node.js, Python, or Java.

🔷 Background in AWS, Docker, Kubernetes, and networking is an advantage.

🔷 Experience in SaaS B2B software companies is an advantage.

🔷 Ability to join the dots around multiple events occurring concurrently and spot patterns.


Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the Job Description - 


Location -- Viman Nagar, Pune

Mode - 5 Days Working


Required Tech Skills:


 ● Strong in PySpark and Python

 ● Good understanding of data structures

 ● Good at SQL query writing and optimization

 ● Strong fundamentals of OOP programming

 ● Good understanding of AWS Cloud and Big Data

 ● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
Best in industry
ETL
Snow flake schema

Job Description for QA Engineer:

  • 6-10 years of experience in ETL testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts; experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience with JIRA and the Xray defect management tool is good to have.
  • Exposure to financial domain knowledge is considered a plus.
  • Testing data readiness (data quality) and addressing code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Demonstrated strong collaboration across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution.
  • Prior experience with State Street and Charles River Development (CRD) is considered a plus.
  • Experience with tools such as PowerPoint, Excel, and SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus.

Key Attributes include:

  • Team player with professional and positive approach
  • Creative, innovative and able to think outside of the box
  • Strong attention to detail during root cause analysis and defect issue resolution
  • Self-motivated & self-sufficient
  • Effective communicator both written and verbal
  • Brings a high level of energy with enthusiasm to generate excitement and motivate the team
  • Able to work under pressure with tight deadlines and/or multiple projects
  • Experience in negotiation and conflict resolution


Read more
Enlink Managed Services Pvt Ltd
Kolkata
5 - 8 yrs
₹15L - ₹25L / yr
ETL
SAS
Informatica PowerCenter
IBM InfoSphere DataStage
Talend
+2 more

Senior ETL Developer in SAS

We are seeking a skilled and experienced ETL Developer with strong SAS expertise to join our growing Data Management team in Kolkata. The ideal candidate will be responsible for designing, developing, implementing, and maintaining ETL processes to extract, transform, and load data from various source systems into the banking data warehouse and other BFSI data repositories. This role requires a strong understanding of banking data warehousing concepts, ETL methodologies, and proficiency in SAS programming for data manipulation and analysis.

Responsibilities:

• Design, develop, and implement ETL solutions using industry best practices and tools, with a strong focus on SAS.
• Develop and maintain SAS programs for data extraction, transformation, and loading.
• Work with source system owners and data analysts to understand data requirements and translate them into ETL specifications.
• Build and maintain data pipelines for the banking database to ensure data quality, integrity, and consistency.
• Perform data profiling, data cleansing, and data validation to ensure accuracy and reliability of data.
• Troubleshoot and resolve the bank’s ETL-related issues, including data quality problems and performance bottlenecks.
• Optimize ETL processes for performance and scalability.
• Document ETL processes, data flows, and technical specifications.
• Collaborate with other team members, including data architects, data analysts, and business users.
• Stay up-to-date with the latest SAS-related ETL technologies and best practices, particularly within the banking and financial services domain.
• Ensure compliance with data governance policies and security standards.

Qualifications:

• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience as an ETL Developer, preferably within the banking or financial services industry.
• Strong proficiency in SAS programming for data manipulation and ETL processes.
• Experience with other ETL tools (e.g., Informatica PowerCenter, DataStage, Talend) is a plus.
• Solid understanding of data warehousing concepts, including dimensional modeling (star schema, snowflake schema).
• Experience working with relational databases (e.g., Oracle, SQL Server) and SQL.
• Familiarity with data quality principles and practices.
• Excellent analytical and problem-solving skills.
• Strong communication and interpersonal skills.
• Ability to work independently and as part of a team.
• Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
• Understanding of regulatory requirements in the banking sector (e.g., RBI guidelines) is an advantage.

Preferred Skills:

• Experience with cloud-based data warehousing solutions (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Knowledge of big data technologies (e.g., Hadoop, Spark).
• Experience with agile development methodologies.
• Relevant certifications (e.g., SAS Certified Professional).

What We Offer:

• Competitive salary and benefits package.
• Opportunity to work with cutting-edge technologies in a dynamic environment.
• Exposure to the banking and financial services domain.
• Professional development and growth opportunities.
• A collaborative and supportive work culture.

Read more
Suzuki Digital
Kalpna  Panwar
Posted by Kalpna Panwar
Gurugram
3 - 5 yrs
₹5L - ₹15L / yr
Qlik Compose
Qlik Replicate
SQL
Data Structures
AWS Simple Notification Service (SNS)
+4 more

Job Title: Data Analytics Engineer

Experience: 3 to 6 years

 Location: Gurgaon (Hybrid)

Employment Type: Full-time



 

Job Description: 

We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and Data Warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses like Amazon Redshift, and experience with Data Lakes using S3. A foundational understanding of Apache Parquet and Python is also desirable. 

Key Responsibilities: 

1. Data Pipeline Development & Maintenance 

  • Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose. 
  • Ensure seamless data replication and transformation across multiple systems. 
  • Implement and optimize CDC-based data pipelines from various source systems (see the conceptual sketch below).
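
In practice the CDC layer here is built with Qlik Replicate (which reads database transaction logs) rather than hand-written code, but the core idea, extracting only the rows changed since the last run and landing them in the target, can be illustrated with a small hypothetical Python sketch. The table, column, and connection names below are made up, and pandas plus SQLAlchemy are assumed.

    import pandas as pd
    from sqlalchemy import create_engine, text

    # Placeholder DSNs for a source system and the analytics warehouse
    source = create_engine("postgresql://user:pass@source-host:5432/core")
    target = create_engine("postgresql://user:pass@warehouse-host:5432/dw")

    # 1. Find the high-water mark already loaded into the target staging table
    with target.connect() as conn:
        last_loaded = conn.execute(text("SELECT MAX(updated_at) FROM stg_customers")).scalar()

    # 2. Extract only rows changed since the last run (timestamp-based change capture,
    #    a simplification of the log-based CDC that Qlik Replicate performs)
    changes = pd.read_sql(
        text("SELECT * FROM customers WHERE updated_at > :ts"),
        source,
        params={"ts": last_loaded or "1900-01-01"},
    )

    # 3. Land the delta; a real pipeline would then merge/upsert it into the Silver layer
    changes.to_sql("stg_customers_delta", target, if_exists="replace", index=False)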

2. Data Layering & Warehouse Management 

  • Implement Bronze, Silver, and Gold layer architectures to optimize data workflows. 
  • Design and manage data pipelines for structured and unstructured data. 
  • Ensure data integrity and quality within Redshift and other analytical data stores. 

3. Database Management & SQL Development 

  • Write, optimize, and troubleshoot complex SQL queries for data warehouses like Redshift (an illustrative query sketch follows below). 
  • Design and implement data models that support business intelligence and analytics use cases. 
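
Purely as an illustration of the kind of analytical SQL involved: the schema, table, and column names below are hypothetical, and the query is run through psycopg2, one common way to reach Redshift from Python since Redshift speaks the PostgreSQL wire protocol.

    import psycopg2

    # Placeholder cluster endpoint and credentials
    conn = psycopg2.connect(
        host="my-cluster.example.ap-south-1.redshift.amazonaws.com",
        port=5439,
        dbname="dw",
        user="analytics_ro",
        password="change-me",
    )

    # Hypothetical Gold-layer aggregation: monthly active customers per region
    query = """
        SELECT region,
               DATE_TRUNC('month', order_date) AS order_month,
               COUNT(DISTINCT customer_id)     AS active_customers
        FROM   gold.fact_orders
        WHERE  order_date >= DATEADD(month, -12, CURRENT_DATE)
        GROUP  BY 1, 2
        ORDER  BY 1, 2;
    """

    with conn, conn.cursor() as cur:
        cur.execute(query)
        for region, month, customers in cur.fetchall():
            print(region, month, customers)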

4. Data Lakes & Storage Optimization 

  • Work with AWS S3-based Data Lakes to store and manage large-scale datasets. 
  • Optimize data ingestion and retrieval using Apache Parquet (see the short example below). 
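
A minimal sketch of landing a cleaned dataset in an S3 data lake as partitioned Parquet; the bucket, prefix, file, and partition column are hypothetical, and pandas with pyarrow and s3fs installed is assumed.

    import pandas as pd

    # Hypothetical Silver-layer frame produced earlier in the pipeline
    df = pd.read_csv("cleaned_orders.csv", parse_dates=["order_date"])
    df["order_month"] = df["order_date"].dt.strftime("%Y-%m")

    # Columnar, compressed storage: Parquet partitioned by month under a silver/ prefix
    df.to_parquet(
        "s3://example-data-lake/silver/orders/",   # placeholder bucket and prefix
        engine="pyarrow",
        partition_cols=["order_month"],
        compression="snappy",
        index=False,
    )

Because Parquet is columnar and the data is partitioned, downstream engines (for example Redshift Spectrum or Athena over the same prefix) can read only the columns and partitions they need, which is the retrieval optimization this bullet refers to.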

5. Data Integration & Automation 

  • Integrate diverse data sources into a centralized analytics platform. 
  • Automate workflows to improve efficiency and reduce manual effort. 
  • Leverage Python for scripting, automation, and data manipulation where necessary. 

6. Performance Optimization & Monitoring 

  • Monitor data pipelines for failures and implement recovery strategies (a small retry/alert sketch follows below). 
  • Optimize data flows for better performance, scalability, and cost-effectiveness. 
  • Troubleshoot and resolve ETL and data replication issues proactively. 
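
Monitoring and recovery are normally configured in the orchestration and replication tooling, but the basic pattern, retrying a failed step with backoff and raising an alert if it still fails, is easy to sketch in Python. The load_step function and the alerting hook are hypothetical placeholders.

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("pipeline")

    def run_with_retries(step, retries=3, backoff_seconds=60):
        """Run a pipeline step, retrying with linear backoff before giving up."""
        for attempt in range(1, retries + 1):
            try:
                step()
                log.info("Step %s succeeded on attempt %d", step.__name__, attempt)
                return
            except Exception:
                log.exception("Step %s failed on attempt %d", step.__name__, attempt)
                if attempt < retries:
                    time.sleep(backoff_seconds * attempt)
        # In a real pipeline this is where an alert would go out (e.g., via Amazon SNS or email)
        raise RuntimeError(f"Step {step.__name__} failed after {retries} attempts")

    def load_step():
        # Placeholder for the actual Qlik, SQL, or Python load logic
        pass

    run_with_retries(load_step)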

 

Technical Expertise Required: 

  • 3 to 6 years of experience in Data Engineering, ETL Development, or related roles. 
  • Hands-on experience with Qlik Replicate & Qlik Compose for data integration. 
  • Strong SQL expertise, with experience in writing and optimizing queries for Redshift. 
  • Experience working with Bronze, Silver, and Gold layer architectures. 
  • Knowledge of Change Data Capture (CDC) pipelines from multiple sources. 
  • Experience working with AWS S3 Data Lakes. 
  • Experience working with Apache Parquet for data storage optimization. 
  • Basic understanding of Python for automation and data processing. 
  • Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus. 
  • Strong analytical and problem-solving skills. 
  • Ability to work in a fast-paced, agile environment. 

 

Preferred Qualifications: 

  • Experience in performance tuning and cost optimization in Redshift. 
  • Familiarity with big data technologies such as Spark or Hadoop. 
  • Understanding of data governance and security best practices. 
  • Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI. 


Read more
Trika Tech
bhagya a
Posted by bhagya a
Bengaluru (Bangalore), Coimbatore
7 - 8 yrs
₹10L - ₹14L / yr
ETL

Work-life balance, a startup environment, continuous learning, and a good working environment.

Read more