50+ ETL Jobs in India
Review Criteria
- Strong Senior Data Engineer profile
- 4+ years of hands-on Data Engineering experience
- Must have experience owning end-to-end data architecture and complex pipelines
- Must have advanced SQL capability (complex queries, large datasets, optimization)
- Must have strong Databricks hands-on experience
- Must be able to architect solutions, troubleshoot complex data issues, and work independently
- Must have Power BI integration experience
- CTC structure: 80% fixed and 20% variable
Preferred
- Experience with call-center data and an understanding of the nuances of data generated in call centers
- Experience implementing data governance, quality checks, or lineage frameworks
- Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture
Job Specific Criteria
- CV Attachment is mandatory
- Are you comfortable integrating with Power BI datasets?
- We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?
Role & Responsibilities
We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.
Key Responsibilities-
- Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
- Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows (see the sketch after this list).
- Architect and deliver high-performance ETL/ELT processes across cloud platforms.
- Implement and enforce data governance standards, including data quality, lineage, and access control.
- Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
- Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
- Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
- Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
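To make the Databricks-oriented responsibilities above concrete, here is a minimal, illustrative PySpark sketch (not the employer's actual code): it reads a raw Delta table, applies a simple aggregation, and writes a date-partitioned Delta output so downstream queries can prune partitions. The paths, table names, and columns are hypothetical; in a Databricks notebook the `spark` session is already provided, so the explicit builder is only for self-containment.

```python
# Minimal illustrative sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

# Read raw events (hypothetical source path).
raw = spark.read.format("delta").load("/mnt/raw/orders")

# Transform: keep valid rows, derive a date column, aggregate per customer per day.
daily = (
    raw.filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("customer_id", "order_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

# Write partitioned by date so downstream queries can prune partitions.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("/mnt/curated/orders_daily"))
```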
Ideal Candidate
- Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
- Advanced SQL skills with experience handling large, complex datasets.
- Strong expertise with Databricks for data engineering workloads.
- Hands-on experience with major cloud platforms — AWS and Azure.
- Deep understanding of data architecture, data modelling, and optimisation techniques.
- Familiarity with BI and reporting environments such as Power BI.
- Strong analytical and problem-solving abilities with a focus on data quality and governance.
- Proficiency in Python or another programming language is a plus.
ROLES AND RESPONSIBILITIES:
We are looking for a Junior Data Engineer who will work under guidance to support data engineering tasks, perform basic coding, and actively learn modern data platforms and tools. The ideal candidate should have foundational SQL knowledge and basic exposure to Databricks. This role is designed for early-career professionals who are eager to grow into full data engineering responsibilities while contributing to data pipeline operations and analytical support.
Key Responsibilities-
- Support the development and maintenance of data pipelines and ETL/ELT workflows under mentorship.
- Write basic SQL queries and transformations, and assist with Databricks notebook tasks.
- Help troubleshoot data issues and contribute to ensuring pipeline reliability.
- Work with senior engineers and analysts to understand data requirements and deliver small tasks.
- Assist in maintaining documentation, data dictionaries, and process notes.
- Learn and apply data engineering best practices, coding standards, and cloud fundamentals.
- Support basic tasks related to Power BI data preparation or integrations as needed.
IDEAL CANDIDATE:
- Foundational SQL skills with the ability to write and understand basic queries.
- Basic exposure to Databricks, data transformation concepts, or similar data tools.
- Understanding of ETL/ELT concepts, data structures, and analytical workflows.
- Eagerness to learn modern data engineering tools, technologies, and best practices.
- Strong problem-solving attitude and willingness to work under guidance.
- Good communication and collaboration skills to work with senior engineers and analysts.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. Bajaj Finance Limited is one of India’s most diversified non-banking financial companies, and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of our 500+ locations across India.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Role Responsibilities:
The following are high-level responsibilities you will take on, though not limited to these:
- Design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 5-7 years of experience in data engineering or data platform development, with at least 2–3 years in a lead or architect role.
- Strong hands-on experience in one or more of the following:
  - Microsoft Fabric (Data Factory, Lakehouse, Data Warehouse)
  - Databricks (Spark, Delta Lake, PySpark, MLflow)
  - Snowflake (Data Warehousing, Snowpipe, Performance Optimization)
  - Power BI (Data Modeling, DAX, Report Development)
- Proficiency in SQL and programming languages like Python or Scala.
- Experience with Azure, AWS, or GCP cloud data services.
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other Cloud Platforms.
What we need:
- B.Tech in Computer Science or equivalent.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
ROLES AND RESPONSIBILITIES:
We are seeking a skilled Data Engineer who can work independently on data pipeline development, troubleshooting, and optimisation tasks. The ideal candidate will have strong SQL skills, hands-on experience with Databricks, and familiarity with cloud platforms such as AWS and Azure. You will be responsible for building and maintaining reliable data workflows, supporting analytical teams, and ensuring high-quality, secure, and accessible data across the organisation.
KEY RESPONSIBILITIES:
- Design, develop, and maintain scalable data pipelines and ETL/ELT workflows.
- Build, optimise, and troubleshoot SQL queries, transformations, and Databricks data processes.
- Work with large datasets to deliver efficient, reliable, and high-performing data solutions.
- Collaborate closely with analysts, data scientists, and business teams to support data requirements.
- Ensure data quality, availability, and security across systems and workflows.
- Monitor pipeline performance, diagnose issues, and implement improvements.
- Contribute to documentation, standards, and best practices for data engineering processes.
IDEAL CANDIDATE:
- Proven experience as a Data Engineer or in a similar data-focused role (3+ years).
- Strong SQL skills with experience writing and optimising complex queries.
- Hands-on experience with Databricks for data engineering tasks.
- Experience with cloud platforms such as AWS and Azure.
- Understanding of ETL/ELT concepts, data modelling, and pipeline orchestration.
- Familiarity with Power BI and data integration with BI tools.
- Strong analytical and troubleshooting skills, with the ability to work independently.
- Experience working end-to-end on data engineering workflows and solutions.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified non-banking financial companies, and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of our 500+ locations across India.
Strong Data Engineer profile
Mandatory (Experience 1): Must have 2+ years of hands-on Data Engineering experience.
Mandatory (Experience 2): Must have end-to-end experience in building & maintaining ETL/ELT pipelines (not just BI/reporting).
Mandatory (Technical 1): Must have strong SQL capability (complex queries + optimization).
Mandatory (Technical 2): Must have hands-on Databricks experience.
Mandatory (Role Requirement): Must be able to work independently, troubleshoot data issues, and manage large datasets.

A leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage.
Skills - Python (Pandas, NumPy) and backend frameworks (FastAPI, Flask), ETL processes, LLM, React.js or Angular, JavaScript or TypeScript.
• Strong proficiency in Python, with experience in data manipulation libraries (e.g., Pandas, NumPy) and backend frameworks (e.g., FastAPI, Flask).
• Hands-on experience with data engineering and analytics, including data pipelines, ETL processes, and working with structured/unstructured data.
• Understanding of React.js/Angular and JavaScript/TypeScript for building responsive user interfaces.
• Familiarity with AI/ML concepts and eagerness to grow into a deeper AI-focused role.
• Ability to work in cross-functional teams and adapt quickly to evolving technologies.
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI (see the sketch after this list)
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
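As a hedged illustration of the backend work described in the first responsibility above (not MyOperator's actual API), here is a minimal FastAPI service with an in-memory store; the service name, endpoints, and model are hypothetical.

```python
# Minimal illustrative FastAPI service (hypothetical endpoints and model).
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-contact-service")

class Contact(BaseModel):
    name: str
    phone: str

# In-memory store purely for the sketch; a real service would use MySQL/Postgres.
_contacts: List[Contact] = []

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/contacts")
def create_contact(contact: Contact) -> dict:
    _contacts.append(contact)
    return {"created": contact.name, "total": len(_contacts)}
```

Locally this could be run with `uvicorn main:app --reload` (assuming the file is named main.py), and endpoints like these are typically exercised with pytest via FastAPI's TestClient.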
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
1. Roadmap & Strategy (The "Why")
- Own the product roadmap for the Data Platform, prioritizing features like real-time ingestion, data quality frameworks, and self-service analytics.
- Translate high-level business questions (e.g., "We need to track customer churn in real-time") into technical requirements for ETL pipelines.
- Define Service Level Agreements (SLAs) for data freshness, availability, and quality.
2. Technical Execution (The "What")
- Write detailed Technical Product Requirements (PRDs) that specify source-to-target mappings, transformation logic, and API integration requirements.
- Collaborate with Data Engineers to decide on architecture trade-offs (e.g., Batch vs. Streaming, Build vs. Buy).
- Champion the adoption of Data Observability tools to detect pipeline failures before business users do.
3. Data Governance & Quality
- Define and enforce data modeling standards (Star Schema, Snowflake Schema).
- Ensure compliance with privacy regulations (GDPR/CCPA/DPDP) regarding how data is ingested, stored, and masked.
- Manage the "Data Dictionary" to ensure all stakeholders understand what specific metrics actually mean.
4. Stakeholder Management
- Act as the primary liaison between Data Producers (Software Engineers sending data) and Data Consumers (BI Analysts, Data Scientists).
- Manage dependencies: If the backend team changes a database column, ensure your ETL roadmap accounts for it.
🎓 What We Are Looking For
Technical "Must-Haves":
- SQL Mastery: You can write complex queries to explore data, validate transformations, and debug issues. You don't wait for an analyst to pull data for you.
- ETL Knowledge: Deep understanding of data integration concepts: Change Data Capture (CDC), Batching, Streaming, Upserts, and Idempotency (see the sketch after this list).
- Data Modeling: You understand Dimensional Modeling, Data Lakes vs. Data Warehouses, and normalization/denormalization.
- API Fluency: You understand how to pull data from 3rd party REST/GraphQL APIs and handle rate limits/pagination.
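To illustrate the upsert/idempotency expectation in the ETL Knowledge item above, here is a small hedged Python sketch: applying the same CDC-style change batch twice leaves the target in the same state. The record shape, keys, and field names are hypothetical.

```python
# Idempotent upsert sketch: replaying the same change records leaves the target unchanged.
from typing import Any, Dict, List

def apply_changes(target: Dict[str, Dict[str, Any]], changes: List[Dict[str, Any]]) -> None:
    """Apply CDC-style change records keyed by a primary key ('id')."""
    for change in changes:
        key = change["id"]
        if change.get("op") == "delete":
            target.pop(key, None)  # deleting a missing key is a no-op
        else:
            target[key] = {k: v for k, v in change.items() if k != "op"}  # upsert

target: Dict[str, Dict[str, Any]] = {}
batch = [
    {"op": "upsert", "id": "c1", "name": "Acme", "churn_risk": 0.12},
    {"op": "upsert", "id": "c2", "name": "Globex", "churn_risk": 0.40},
    {"op": "delete", "id": "c2"},
]

apply_changes(target, batch)
apply_changes(target, batch)  # replay: same end state, hence idempotent
print(target)                 # {'c1': {'id': 'c1', 'name': 'Acme', 'churn_risk': 0.12}}
```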
Product Skills:
- Experience writing technical specs for backend/data teams.
- Ability to prioritize technical debt vs. new features.
- Strong communication skills to explain "Data Latency" to non-technical executives.
Preferred Tech Stack Exposure:
- Orchestration: Airflow, Dagster, or Prefect.
- Warehousing: Snowflake, BigQuery, or Redshift.
- Transformation: dbt (data build tool).
- Streaming: Kafka or Kinesis.
As a Data Engineer at PalTech, you will design, develop, and maintain scalable and reliable data pipelines to ensure seamless data flow across systems. You will leverage SQL and leading ETL tools (such as Informatica, ADF, etc.) to support data integration and transformation needs. This role involves building and optimizing data warehouse architectures, performing performance tuning, and ensuring high levels of data quality, accuracy, and consistency throughout the data lifecycle.
You will collaborate closely with cross-functional teams to understand business requirements and translate them into effective data solutions. The ideal candidate should possess strong problem-solving skills, sound knowledge of data architecture principles, and a passion for building clean and efficient data systems.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using SQL and tools such as Informatica, ADF, etc.
- Build and optimize data warehouse and data lake solutions for reporting, analytics, and operational usage.
- Apply strong understanding of data warehousing concepts to architect scalable data solutions.
- Handle large datasets and design effective load/update strategies.
- Collaborate with data analysts, business users, and data scientists to understand requirements and deliver scalable solutions.
- Implement data quality checks and validation frameworks to ensure data reliability and integrity (see the sketch after this list).
- Perform SQL and ETL performance tuning and optimization.
- Work with structured and semi-structured data from various source systems.
- Monitor, troubleshoot, and resolve issues in data workflows.
- Maintain documentation for data pipelines, data flows, and data definitions.
- Follow best practices in data engineering including security, logging, and error handling.
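A minimal sketch, assuming pandas and hypothetical column names, of the kind of data quality checks referenced above; a production framework would also log results and fail or quarantine loads on violations.

```python
# Simple data quality checks with pandas (hypothetical columns and data).
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount":   [100.0, -5.0, 250.0, None],
})

checks = {
    "no_null_order_id":    df["order_id"].notna().all(),
    "order_id_unique":     df["order_id"].is_unique,
    "amount_not_null":     df["amount"].notna().all(),
    "amount_non_negative": (df["amount"].dropna() >= 0).all(),
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```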
Required Skills & Qualifications
Education:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Technical Skills:
- Strong proficiency in SQL and data manipulation.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, ADF).
- Experience with cloud data warehouse platforms such as BigQuery, Redshift, or Snowflake.
- Strong understanding of data warehousing concepts and data modeling.
- Proficiency in Python or a similar programming language.
- Experience working with RDBMS platforms (e.g., SQL Server, Oracle).
- Familiarity with version control systems and job schedulers.
Experience:
- 4 to 8 years of relevant experience in data engineering and ETL development.
Soft Skills:
- Strong problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a cross-functional team environment.
Who We Are
At Sonatype, we help organizations build better, more secure software by enabling them to understand and control their software supply chains. Our products are trusted by thousands of engineering teams globally, providing critical insights into dependency health, license risk, and software security. We’re passionate about empowering developers—and we back it with data.
The Opportunity
We’re looking for a Data Engineer with full stack expertise to join our growing Data Platform team. This role blends data engineering, microservices, and full-stack development to deliver end-to-end services that power analytics, machine learning, and advanced search across Sonatype.
You will design and build data-driven microservices and workflows using Java, Python, and Spring Batch, implement frontends for data workflows, and deploy everything through CI/CD pipelines into AWS ECS/Fargate. You’ll also ensure services are monitorable, debuggable, and reliable at scale, while clearly documenting designs with Mermaid-based sequence and dataflow diagrams.
This is a hands-on engineering role for someone who thrives at the intersection of data systems, fullstack development, ML, and cloud-native platforms.
What You’ll Do
- Design, build, and maintain data pipelines, ETL/ELT workflows, and scalable microservices.
- Develop complex web-scraping solutions (Playwright) and real-time pipelines (Kafka/queues/Flink).
- Develop end-to-end microservices with backend (Java 5+, Python 5+, Spring Batch 2+) and frontend (React or any modern framework).
- Deploy, publish, and operate services in AWS ECS/Fargate using CI/CD pipelines (Jenkins, GitOps).
- Architect and optimize data storage models in SQL (MySQL, PostgreSQL) and NoSQL stores.
- Implement web scraping and external data ingestion pipelines.
- Enable Databricks and PySpark-based workflows for large-scale analytics.
- Build advanced data search capabilities such as fuzzy matching, vector similarity search, and semantic retrieval (see the sketch after this list).
- Apply ML techniques (scikit-learn, classification algorithms, predictive modeling) to data-driven solutions.
- Implement observability, debugging, monitoring, and alerting for deployed services.
- Create Mermaid sequence diagrams, flowcharts, and dataflow diagrams to document system architecture and workflows.
- Drive best practices in fullstack data service development, including architecture, testing, and documentation.
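As a rough illustration of the vector-similarity idea behind the search capabilities listed above (not Sonatype's implementation), here is a cosine-similarity ranking over toy embedding vectors; in practice the embeddings would come from an embedding model and live in a proper vector index.

```python
# Cosine-similarity search sketch over hypothetical embedding vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "index" of component embeddings (made-up 4-dimensional vectors).
index = {
    "log4j-core":      np.array([0.9, 0.1, 0.0, 0.2]),
    "logback-classic": np.array([0.8, 0.2, 0.1, 0.3]),
    "junit":           np.array([0.1, 0.9, 0.4, 0.0]),
}

query = np.array([0.85, 0.15, 0.05, 0.25])  # embedding of the search query

ranked = sorted(index.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
for name, _ in ranked:
    print(name)  # components most similar to the query come first
```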
What We’re Looking For
Minimum Qualifications
- 2+ years of experience as a Data Engineer or in a backend software engineering role
- Strong programming skills in Python, Scala, or Java
- Hands-on experience with HBase or similar NoSQL columnar stores
- Hands-on experience with distributed data systems like Spark, Kafka, or Flink
- Proficient in writing complex SQL and optimizing queries for performance
- Experience building and maintaining robust ETL/ELT pipelines in production
- Familiarity with workflow orchestration tools (Airflow, Dagster, or similar)
- Understanding of data modeling techniques (star schema, dimensional modeling, etc.)
- Familiarity with CI/CD pipelines (Jenkins or similar)
- Ability to visualize and communicate architectures using Mermaid diagrams
Bonus Points
- Experience working with Databricks, dbt, Terraform, or Kubernetes
- Familiarity with streaming data pipelines or real-time processing
- Exposure to data governance frameworks and tools
- Experience supporting data products or ML pipelines in production
- Strong understanding of data privacy, security, and compliance best practices
Why You’ll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software
- Modern tooling: Leverage the best of open-source and cloud-native technologies
- Collaborative culture: Join a passionate team that values learning, autonomy, and impact
Data Engineer – Validation & Quality
Responsibilities
- Build rule-based and statistical validation frameworks using Pandas / NumPy (see the sketch after this list).
- Implement contradiction detection, reconciliation, and anomaly flagging.
- Design and compute confidence metrics for each evidence record.
- Automate schema compliance, sampling, and checksum verification across data sources.
- Collaborate with the Kernel to embed validation results into every output artifact.
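A minimal sketch, assuming pandas/NumPy and a hypothetical evidence schema, of how rule-based checks, a simple z-score anomaly flag, and a naive per-record confidence score might fit together; the actual frameworks, rules, and thresholds are not specified in this posting.

```python
# Rule-based validation plus z-score anomaly flagging (hypothetical schema and thresholds).
import numpy as np
import pandas as pd

records = pd.DataFrame({
    "evidence_id": [f"e{i}" for i in range(8)],
    "value":       [10.0, 11.0, 10.5, 9.8, 10.2, 11.1, 9.9, 55.0],
    "source":      ["a", "a", "b", "b", "a", "b", "a", None],
})

# Rule-based checks per record.
records["rule_has_source"] = records["source"].notna()
records["rule_value_positive"] = records["value"] > 0

# Statistical check: flag values more than 2 standard deviations from the mean.
z = (records["value"] - records["value"].mean()) / records["value"].std(ddof=0)
records["anomaly"] = np.abs(z) > 2.0

# Naive confidence metric: fraction of passed rules, halved when the record is anomalous.
rule_cols = ["rule_has_source", "rule_value_positive"]
records["confidence"] = records[rule_cols].mean(axis=1) * np.where(records["anomaly"], 0.5, 1.0)

print(records[["evidence_id", "anomaly", "confidence"]])
```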
Requirements
- 5+ years in data engineering, data quality, or MLOps validation.
- Strong SQL optimization and ETL background.
- Familiarity with data lineage, DQ frameworks, and regulatory standards (SOC 2 / GDPR).
Review Criteria
- Strong DevOps / Cloud Engineer profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
About the Role:
We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes (see the sketch after this list).
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
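For the pipeline-development responsibility in the first bullet above, here is a hedged, minimal end-to-end ETL sketch: extract from a CSV, transform with pandas, and load into SQLite via SQLAlchemy. The file name, columns, and target table are hypothetical.

```python
# Minimal ETL sketch: CSV -> pandas transform -> SQLite (hypothetical file and schema).
import pandas as pd
from sqlalchemy import create_engine

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)  # e.g. columns: order_id, amount, country

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["order_id"]).copy()   # basic cleansing
    df["amount"] = df["amount"].fillna(0.0)      # standardize missing amounts
    return df.groupby("country", as_index=False)["amount"].sum()

def load(df: pd.DataFrame, url: str = "sqlite:///warehouse.db") -> None:
    engine = create_engine(url)
    df.to_sql("sales_by_country", engine, if_exists="replace", index=False)

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```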
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Role Responsibilities:
The following are high-level responsibilities you will take on, though not limited to these:
- Lead the design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 7+ years of experience in data engineering or data platform development, with at least 2–3 years in a lead or architect role.
- Strong hands-on experience in one or more of the following:
  - Microsoft Fabric (Data Factory, Lakehouse, Data Warehouse)
  - Databricks (Spark, Delta Lake, PySpark, MLflow)
  - Snowflake (Data Warehousing, Snowpipe, Performance Optimization)
  - Power BI (Data Modeling, DAX, Report Development)
- Proficiency in SQL and programming languages like Python or Scala.
- Experience with Azure, AWS, or GCP cloud data services.
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other Cloud Platforms.
What we need:
- B.Tech in Computer Science or equivalent.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
At Loyalty Juggernaut, we’re on a mission to revolutionize customer loyalty through AI-driven SaaS solutions. We are THE JUGGERNAUTS, driving innovation and impact in the loyalty ecosystem with GRAVTY®, our SaaS Product that empowers multinational enterprises to build deeper customer connections. Designed for scalability and personalization, GRAVTY® delivers cutting-edge loyalty solutions that transform customer engagement across diverse industries including Airlines, Airport, Retail, Hospitality, Banking, F&B, Telecom, Insurance and Ecosystem.
Our Impact:
- 400+ million members connected through our platform.
- Trusted by 100+ global brands/partners, driving loyalty and brand devotion worldwide.
Proud to be a Three-Time Champion for Best Technology Innovation in Loyalty!!
Explore more about us at www.lji.io.
What you will OWN:
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using SQL and AWS ‘big data’ technologies.
- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements, automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Work with stakeholders, including the Technical Architects, Developers, Product Owners, and Executives, to assist with data-related technical issues and support their data infrastructure needs.
- Create tools for data management and data analytics that assist stakeholders in building and optimizing our product to become an innovative industry leader.
You would make a GREAT FIT if you:
- Have 2 to 5 years of relevant backend development experience, with solid expertise in Python.
- Possess strong skills in Data Structures and Algorithms, and can write optimized, maintainable code.
- Are familiar with database systems, and can comfortably work with PostgreSQL, as well as NoSQL solutions like MongoDB or DynamoDB.
- Have hands-on experience using cloud data warehouses like AWS Redshift, GBQ, etc.
- Have experience with AWS cloud services (EC2, EMR, RDS, Redshift, and AWS Batch), which would be an added advantage.
- Have a solid understanding of ETL processes and tools and can build or modify ETL pipelines effectively.
- Have experience managing or building data pipelines and architectures at scale.
- Understand the nuances of data ingestion, transformation, storage, and analytics workflows.
- Communicate clearly and work collaboratively across engineering and product.
Why Choose US?
- This opportunity offers a dynamic and supportive work environment where you'll have the chance to not just collaborate with talented technocrats but also work with globally recognized brands, gain exposure, and carve your own career path.
- You will get to innovate and dabble in the future of technology -Enterprise Cloud Computing, Blockchain, Machine Learning, AI, Mobile, Digital Wallets, and much more.
We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities :
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
Required Skills & Qualifications :
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with any cloud infrastructure and deployment practices.
- Data Engineering or Analytics expertise is preferred, e.g., Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), Power BI, Tableau, data modelling, or data warehouse development.
About NEXUS SP Solutions
European tech company (Spain) in telecom/IT/cybersecurity. We’re hiring a Part-time Automation Developer (20h/week) to build and maintain scripts, integrations and CI/CD around our Odoo v18 + eCommerce stack.
What you’ll do
• Build Python automations: REST API integrations (vendors/payments), data ETL, webhooks, cron jobs (see the sketch after this list).
• Maintain CI/CD (GitHub Actions) for modules and scripts; basic Docker.
• Implement backups/alerts and simple monitors (logs, retries).
• Collaborate with Full-Stack dev and UX on delivery and performance.
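As a rough sketch of the Python automation work in the first bullet above, here is a small REST pull using the `requests` library with retries and logging; the endpoint, token, and retry policy are all hypothetical.

```python
# REST API pull with simple retries and logging (hypothetical endpoint and token).
import logging
import time
from typing import List

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vendor-sync")

def fetch_orders(base_url: str, token: str, retries: int = 3, backoff: float = 2.0) -> List[dict]:
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(1, retries + 1):
        try:
            resp = requests.get(f"{base_url}/orders", headers=headers, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            log.warning("Attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)  # linear backoff between retries
    return []

# Example call (hypothetical vendor API):
# orders = fetch_orders("https://api.example-vendor.com/v1", token="...")
```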
Requirements
• 2–5 yrs coding in Python for integrations/ETL.
• REST/JSON, OAuth2, webhooks; solid Git.
• Basic Docker + GitHub Actions (or GitLab CI).
• SQL/PostgreSQL basics; English for daily comms (Spanish/French is a plus).
• ≥ 3h overlap with CET; able to start within 15–30 days.
Nice to have
• Odoo RPC/XML-RPC, Selenium/Playwright, Linux server basics, retry/idempotency patterns.
Compensation & terms
• ₹2.5–5 LPA for 20h/week (contract/retainer).
• Long-term collaboration; IP transfer, work in our repos; PR-based workflow; CI/CD.
Process
1) A 30–45 minute technical call. 2) Paid mini-task (8–10h): a Python micro-service calling a REST API with retries, logging and a unit test. 3) Offer.
Job Title : Informatica Cloud Developer / Migration Specialist
Experience : 6 to 10 Years
Location : Remote
Notice Period : Immediate
Job Summary :
We are looking for an experienced Informatica Cloud Developer with strong expertise in Informatica IDMC/IICS and experience in migrating from PowerCenter to Cloud.
The candidate will be responsible for designing, developing, and maintaining ETL workflows, data warehouses, and performing data integration across multiple systems.
Mandatory Skills :
Informatica IICS/IDMC, Informatica PowerCenter, ETL Development, SQL, Data Migration (PowerCenter to IICS), and Performance Tuning.
Key Responsibilities :
- Design, develop, and maintain ETL processes using Informatica IICS/IDMC.
- Work on migration projects from Informatica PowerCenter to IICS Cloud.
- Troubleshoot and resolve issues related to mappings, mapping tasks, and taskflows.
- Analyze business requirements and translate them into technical specifications.
- Conduct unit testing, performance tuning, and ensure data quality.
- Collaborate with cross-functional teams for data integration and reporting needs.
- Prepare and maintain technical documentation.
Required Skills :
- 4 to 5 years of hands-on experience in Informatica Cloud (IICS/IDMC).
- Strong experience with Informatica PowerCenter.
- Proficiency in SQL and data warehouse concepts.
- Good understanding of ETL performance tuning and debugging.
- Excellent communication and problem-solving skills.
Job Title: Data Engineer / Integration Engineer
Job Summary:
We are seeking a highly skilled Data Engineer / Integration Engineer to join our team. The ideal candidate will have expertise in Python, workflow orchestration, cloud platforms (GCP/Google BigQuery), big data frameworks (Apache Spark or similar), API integration, and Oracle EBS. The role involves designing, developing, and maintaining scalable data pipelines, integrating various systems, and ensuring data quality and consistency across platforms. Knowledge of Ascend.io is a plus.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines and workflows.
- Develop and optimize ETL/ELT processes using Python and workflow automation tools.
- Implement and manage data integration between various systems, including APIs and Oracle EBS.
- Work with Google Cloud Platform (GCP) or Google BigQuery (GBQ) for data storage, processing, and analytics.
- Utilize Apache Spark or similar big data frameworks for efficient data processing.
- Develop robust API integrations for seamless data exchange between applications.
- Ensure data accuracy, consistency, and security across all systems.
- Monitor and troubleshoot data pipelines, identifying and resolving performance issues.
- Collaborate with data analysts, engineers, and business teams to align data solutions with business goals.
- Document data workflows, processes, and best practices for future reference.
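A hedged sketch of the workflow-orchestration responsibility in the first bullet above, using Apache Airflow (one of the orchestration tools named in the requirements): a tiny daily DAG wiring an extract task to a load task. The DAG id, schedule, and task bodies are hypothetical, and the `schedule` argument assumes Airflow 2.4+.

```python
# Tiny daily workflow sketch with Apache Airflow (hypothetical DAG id and task bodies).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pulling data from the source system")   # placeholder extract step

def load() -> None:
    print("loading transformed data into BigQuery")  # placeholder load step

with DAG(
    dag_id="ebs_to_bigquery_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # run load only after extract succeeds
```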
Required Skills & Qualifications:
- Strong proficiency in Python for data engineering and workflow automation.
- Experience with workflow orchestration tools (e.g., Apache Airflow, Prefect, or similar).
- Hands-on experience with Google Cloud Platform (GCP) or Google BigQuery (GBQ).
- Expertise in big data processing frameworks, such as Apache Spark.
- Experience with API integrations (REST, SOAP, GraphQL) and handling structured/unstructured data.
- Strong problem-solving skills and ability to optimize data pipelines for performance.
- Experience working in an agile environment with CI/CD processes.
- Strong communication and collaboration skills.
Preferred Skills & Nice-to-Have:
- Experience with Ascend.io platform for data pipeline automation.
- Knowledge of SQL and NoSQL databases.
- Familiarity with Docker and Kubernetes for containerized workloads.
- Exposure to machine learning workflows is a plus.
Why Join Us?
- Opportunity to work on cutting-edge data engineering projects.
- Collaborative and dynamic work environment.
- Competitive compensation and benefits.
- Professional growth opportunities with exposure to the latest technologies.
How to Apply:
Interested candidates can apply by sending their resume to [your email/contact].
We are looking for a highly skilled Sr. Big Data Engineer with 3-5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience in implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.
Responsibilities
- Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python.
- Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis.
- Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium, or similar frameworks (see the sketch after this list).
- Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
- Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
- Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
- Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
- Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
- Implement monitoring, logging, and alerting for critical data pipelines.
- Follow best practices for data security, compliance, and cost optimization in cloud environments.
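To illustrate the CDC responsibility marked above, here is a hedged PySpark Structured Streaming sketch: consume Debezium-style change records from Kafka and merge them into a Delta table via foreachBatch. The topic, schema, paths, and operation flags are hypothetical, and the sketch assumes the delta-spark package is available on the cluster.

```python
# Streaming CDC sketch: Kafka -> Structured Streaming -> Delta merge (hypothetical topic/schema/paths).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("orders_cdc").getOrCreate()

schema = StructType([
    StructField("id", StringType()),
    StructField("status", StringType()),
    StructField("op", StringType()),  # Debezium-style operation flag, e.g. 'c', 'u', 'd'
])

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders.cdc")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("r"))
          .select("r.*"))

def upsert_batch(batch_df, batch_id):
    # Merge each micro-batch into the curated Delta table: delete on 'd', otherwise upsert.
    target = DeltaTable.forPath(spark, "/data/curated/orders")
    (target.alias("t")
           .merge(batch_df.alias("s"), "t.id = s.id")
           .whenMatchedDelete(condition="s.op = 'd'")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

query = (stream.writeStream
         .foreachBatch(upsert_batch)
         .option("checkpointLocation", "/chk/orders_cdc")
         .start())
```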
Required Skills & Experience
- Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
- Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
- Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
- CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks.
- AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
- ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
- Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
- Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
- Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
- Monitoring/Logging: CloudWatch, Prometheus, ELK/Opensearch.
- Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Experience in large-scale data lake / lakehouse architectures.
- Knowledge of data warehousing concepts and query optimisation.
- Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
- Exposure to ML/AI data pipelines is a plus.
Tools & Technologies (must-have exposure)
- Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
- Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
- Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
- Programming & Scripting: Python, SQL, Bash
- Orchestration: Airflow / Step Functions
- Version Control & CI/CD: Git, Jenkins/CodePipeline
- Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
Required Qualifications
- Bachelor’s degree with a Commerce background / MBA in Finance (mandatory).
- 3+ years of hands-on implementation/project management experience
- Proven experience delivering projects in Fintech, SaaS, or ERP environments
- Strong expertise in accounting principles, R2R (Record-to-Report), treasury, and financial workflows.
- Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries)
- Experience working with ETL pipelines or data migration processes
- Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
- Strong communication and stakeholder management skills
- Ability to manage multiple projects simultaneously and drive client success
Preferred Qualifications
- Prior experience implementing financial automation tools (e.g., SAP, Oracle, Anaplan, Blackline)
- Familiarity with API integrations and basic data mapping
- Experience in agile/scrum-based implementation environments
- Exposure to reconciliation, book closure, AR/AP, and reporting systems
- PMP, CSM, or similar certifications
Required Skills and Qualifications :
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Modeler or in a similar role at an asset manager or financial firm.
- Strong understanding of various business concepts related to buy-side financial firms. Understanding of Private Markets (Private Credit, Private Equity, Real Estate, Alternatives) is required.
- Strong understanding of database design principles and data modeling techniques (e.g., ER modeling, dimensional modeling).
- Knowledge of SQL and experience with relational databases (e.g., Oracle, SQL Server, MySQL).
- Familiarity with NoSQL databases is a plus.
- Excellent analytical and problem-solving skills.
- Strong communication skills and the ability to work collaboratively.
Preferred Qualifications:
- Experience in data warehousing and business intelligence.
- Knowledge of data governance practices.
- Certification in data modeling or related fields.
Key Responsibilities :
- Design and develop conceptual, logical, and physical data models based on business requirements.
- Collaborate with stakeholders in finance, operations, risk, legal, compliance and front offices to gather and analyze data requirements.
- Ensure data models adhere to best practices for data integrity, performance, and security.
- Create and maintain documentation for data models, including data dictionaries and metadata.
- Conduct data profiling and analysis to identify data quality issues.
- Conduct detailed meetings and discussions with business to translate broad business functionality requirements into data concepts, data models and data products.
💡 Transform Banking Data with Us!
We’re on the lookout for a Senior Denodo Developer (Remote) to shape the future of data virtualization in the banking domain. If you’re passionate about turning complex financial data into actionable insights, this role is for you! 🚀
✨ What You’ll Do:
✔ Build cutting-edge Denodo-based data virtualization solutions
✔ Collaborate with banking SMEs, architects & analysts
✔ Design APIs, data services & scalable models
✔ Ensure compliance with global banking standards
✔ Mentor juniors & drive best practices
💼 What We’re Looking For:
🔹 6+ years of IT experience (3+ years in Denodo)
🔹 Strong in Denodo VDP, Scheduler & Data Catalog
🔹 Skilled in SQL, optimization & performance tuning
🔹 Banking/Financial services domain expertise (CBS, Payments, KYC/AML, Risk & Compliance)
🔹 Cloud knowledge (AWS, Azure, GCP)
📍 Location: Remote
🎯 Experience: 6+ years
🌟 Catchline for candidates:
👉 “If you thrive in the world of data and want to make banking smarter, faster, and more secure — this is YOUR chance!”
📩 Apply Now:
- Connect with me here on Cutshort and share your resume/message directly.
Let’s build something great together 🚀
#WeAreHiring #DenodoDeveloper #BankingJobs #RemoteWork #DataVirtualization #FinTechCareers #DataIntegration #TechTalent
• Technical expertise in developing Master Data Management (MDM), data extraction, transformation, and load (ETL) applications, and big data solutions using existing and emerging technology platforms and cloud architecture
• Functions as lead developer
• Support system analysis, technical/data design, development, and unit testing, and oversee end-to-end data solutions
• Technical SME in Master Data Management application, ETL, big data and cloud technologies
• Collaborate with IT teams to ensure technical designs and implementations account for requirements, standards, and best practices
• Performance tuning of end-to-end MDM, database, ETL, and big data processes, or of the source/target database endpoints, as needed.
• Mentor and advise junior members of the team.
• Serve as technical lead and solution lead for a team of onshore and offshore developers
📢 DATA SOURCING & ANALYSIS EXPERT (L3 Support) – Mumbai 📢
Are you ready to supercharge your Data Engineering career in the financial domain?
We’re seeking a seasoned professional (5–7 years experience) to join our Mumbai team and lead in data sourcing, modelling, and analysis. If you’re passionate about solving complex challenges in Relational & Big Data ecosystems, this role is for you.
What You’ll Be Doing
- Translate business needs into robust data models, program specs, and solutions
- Perform advanced SQL optimization, query tuning, and L3-level issue resolution
- Work across the entire data stack: ETL, Python / Spark, Autosys, and related systems
- Debug, monitor, and improve data pipelines in production
- Collaborate with business, analytics, and engineering teams to deliver dependable data services
What You Should Bring
- 5+ years in financial / fintech / capital markets environment
- Proven expertise in relational databases and big data technologies
- Strong command over SQL tuning, query optimization, indexing, partitioning
- Hands-on experience with ETL pipelines, Spark / PySpark, Python scripting, job scheduling (e.g. Autosys); a PySpark tuning sketch follows this list
- Ability to troubleshoot issues at the L3 level, root cause analysis, performance tuning
- Good communication skills — you’ll coordinate with business users, analytics, and tech teams
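As a purely illustrative sketch of the SQL/Spark tuning skills described above, the snippet below shows two common PySpark techniques, a broadcast join and a partitioned write; the paths and column names are hypothetical placeholders, not part of the role:

```python
# PySpark tuning sketch: broadcast a small reference table and partition output.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

trades = spark.read.parquet("/data/trades")        # large fact-style dataset
ref = spark.read.parquet("/data/instrument_ref")   # small reference dataset

# Broadcast the small reference table to avoid a full shuffle join.
enriched = trades.join(F.broadcast(ref), on="instrument_id", how="left")

# Partition output by business date so downstream queries can prune files.
(enriched
    .repartition("trade_date")
    .write.mode("overwrite")
    .partitionBy("trade_date")
    .parquet("/data/enriched_trades"))
```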
We are looking for an experienced DB2 developer/DBA who has worked on a critical application with a large database. The role requires the candidate to understand the landscape of the application and its data, including its topology across the online data store and the data warehousing counterparts. The challenges we strive to solve include scalability and performance when dealing with very large data sets and multiple data sources.
The role involves collaborating with global team members and provides a unique opportunity to network with a diverse group of people.
The candidate who fills this Database Developer role on our team will be involved in building solutions from the requirements stage through deployment. A successful candidate is self-motivated, innovative, thinks outside the box, has excellent communication skills, and can work with ease with clients and stakeholders from both the business and technology sides.
Required Skills:
Expertise in writing complex data retrieval queries, stored procedures, and performance tuning
Experience in migrating a large-scale database from Sybase to a new tech stack
Expertise in relational databases (Sybase, Azure SQL Server, DB2) and NoSQL databases
Strong knowledge of Linux shell scripting
Working knowledge of Python programming
Working knowledge of Informatica
Good knowledge of Autosys or a similar scheduling tool
Detail oriented, with the ability to turn deliverables around quickly and with a high degree of accuracy
Strong analytical skills, with the ability to interpret business requirements and produce functional and technical design documents
Good time management skills - ability to prioritize and multi-task, handling multiple efforts at once
Strong desire to understand and learn the domain.
Desired Skills:
Experience in Sybase, Azure SQL Server, DB2
Experience in migrating relational database to modern tech stack
Experience in a financial services/banking industry
About Us:
PluginLive is an all-in-one tech platform that bridges the gap between all its stakeholders - Corporates, Institutes, Students, and Assessment & Training Partners. This ecosystem helps Corporates with brand building and positioning among colleges and the student community so they can scale their human capital, while increasing student placements for Institutes and giving Students a real-time perspective of the corporate world to help them upskill into more desirable candidates.
Role Overview:
Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.
Responsibilities:
- Assist in building and maintaining ETL/ELT pipelines for data processing
- Write SQL queries to extract and analyze data from various sources
- Support data quality checks and basic data validation processes
- Create simple dashboards and reports using visualization tools
- Learn and work with Oracle Cloud services under guidance
- Use Python for basic data manipulation and cleaning tasks (a pandas sketch follows this list)
- Document data processes and maintain data dictionaries
- Collaborate with team members to understand data requirements
- Participate in troubleshooting data issues with senior support
- Contribute to data migration tasks as needed
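As a rough idea of the "basic data manipulation and cleaning" work mentioned above, here is a minimal pandas sketch; the file and column names are hypothetical:

```python
# Minimal pandas cleaning sketch; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("placements_raw.csv")

# Standardise column names and trim whitespace in text fields.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["college_name"] = df["college_name"].str.strip()

# Drop exact duplicates and rows missing a mandatory key.
df = df.drop_duplicates().dropna(subset=["student_id"])

# Coerce types and flag obviously invalid values instead of silently dropping.
df["ctc_lpa"] = pd.to_numeric(df["ctc_lpa"], errors="coerce")
df["ctc_valid"] = df["ctc_lpa"].between(0, 100)

df.to_csv("placements_clean.csv", index=False)
```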
Qualifications:
Required:
- Bachelor's degree in Computer Science, Information Systems, or related field
- Around 2 years of experience in data engineering or a related field
- Strong SQL knowledge and database concepts
- Comfortable with Python programming
- Understanding of data structures and ETL concepts
- Problem-solving mindset and attention to detail
- Good communication skills
- Willingness to learn cloud technologies
Preferred:
- Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
- Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
- Experience with Pandas for data manipulation
- Understanding of data warehousing concepts
- Familiarity with version control (Git)
- Academic projects or internships involving data processing
Nice-to-Have:
- Knowledge of dbt, BigQuery, or Snowflake
- Exposure to big data concepts
- Experience with Jupyter notebooks
- Comfort with AI-assisted coding tools (Copilot, GPTs)
- Personal projects showcasing data work
What We Offer:
- Mentorship from senior data engineers
- Hands-on learning with modern data stack
- Access to paid AI tools and learning resources
- Clear growth path to mid-level engineer
- Direct impact on product and data strategy
- No unnecessary meetings — focused execution
- Strong engineering culture with continuous learning opportunities
Must have skills:
1. GCP - GCS, PubSub, Dataflow or DataProc, Bigquery, Airflow/Composer, Python(preferred)/Java
2. ETL on GCP Cloud - Build pipelines (Python/Java) + Scripting, Best Practices, Challenges
3. Knowledge of Batch and Streaming data ingestion; building end-to-end data pipelines on GCP (a minimal Beam/Dataflow sketch follows this list)
4. Knowledge of Databases (SQL, NoSQL), On-Premise and On-Cloud, SQL vs No SQL, Types of No-SQL DB (At least 2 databases)
5. Data Warehouse concepts - Beginner to Intermediate level
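For orientation only, the snippet below sketches what an end-to-end batch pipeline on GCP might look like with the Apache Beam Python SDK (which Dataflow runs); the bucket, project, dataset, and field names are hypothetical, and it assumes the target BigQuery table already exists:

```python
# Minimal Apache Beam sketch of a batch pipeline that could run on Dataflow.
# Bucket, project, dataset, and field names are hypothetical placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line: str) -> dict:
    """Parse one JSON line and keep only the fields the warehouse needs."""
    event = json.loads(line)
    return {"user_id": event["user_id"], "amount": float(event["amount"])}

options = PipelineOptions()  # add --runner=DataflowRunner, --project, etc. to deploy

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
     | "Parse" >> beam.Map(parse_event)
     | "Write" >> beam.io.WriteToBigQuery(
           "example_project:analytics.events",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```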
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement dimension and fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data
from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical
data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source
applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data
warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate,
design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and
data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data
requirements to implement the BI solutions.
● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform it into
reporting & analytics.
● Define and document the use of BI through user experiences/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers,
quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!
✅ Key Details
- Work Type: Freelance / Contract
- Location: Remote
- Time Zones: IST / EST only
- Domain: Data & AI, Cloud, Big Data, Machine Learning
- Collaboration: Work with industry leaders on innovative projects
🔹 Open Roles
1. Databricks – Senior Consultant
- Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
- Experience: 6+ years
2. Databricks – ML Engineer
- Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
- Experience: 4+ years
3. Databricks – Solution Architect
- Skills: Azure, GCP, AWS, CI/CD, MLOps
- Experience: 7+ years
4. Databricks – Solution Consultant
- Skills: SQL, Spark, BigQuery, Python, Scala
- Experience: 2+ years
✅ What We Offer
- Opportunity to work with top-tier professionals and clients
- Exposure to cutting-edge technologies and real-world data challenges
- Flexible remote work environment aligned with IST / EST time zones
- Competitive compensation and growth opportunities
📌 Skills We Value
Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark |
- 4+ years of experience
- Proficiency in Python programming.
- Experience with Python service development (REST APIs, e.g. with Flask); a minimal Flask sketch follows this list
- Basic knowledge of front-end development.
- Basic knowledge of Data manipulation and analysis libraries
- Code versioning and collaboration. (Git)
- Knowledge of libraries for extracting data from websites (web scraping).
- Knowledge of SQL and NoSQL databases
- Familiarity with Cloud (Azure /AWS) technologies
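As a small, hedged illustration of Python service development with Flask, the sketch below exposes two hypothetical REST endpoints backed by an in-memory store; none of the routes or payload fields come from the role itself:

```python
# Minimal Flask REST service sketch; routes and payload shape are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

_ITEMS = {}  # in-memory store, purely for illustration

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    item = _ITEMS.get(item_id)
    return (jsonify(item), 200) if item else (jsonify({"error": "not found"}), 404)

@app.route("/items", methods=["POST"])
def create_item():
    payload = request.get_json(force=True)
    _ITEMS[payload["id"]] = payload
    return jsonify(payload), 201

if __name__ == "__main__":
    app.run(debug=True)
```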
Role: Technical Lead - Finance Solutions
Exp: 3 - 6 Years
CTC: up to 20 LPA
Required Qualifications
- Bachelor’s degree in Finance, Business Administration, Information Systems, or related field
- 3+ years of hands-on implementation/project management experience
- Proven experience delivering projects in Fintech, SaaS, or ERP environments
- Strong understanding of accounting principles and financial workflows
- Hands-on SQL experience, including the ability to write and debug complex queries (joins, CTEs, subqueries); an illustrative query sketch follows this list
- Experience working with ETL pipelines or data migration processes
- Proficiency in tools like Jira, Confluence, Excel, and project tracking systems
- Strong communication and stakeholder management skills
- Ability to manage multiple projects simultaneously and drive client success
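To illustrate the kind of query this role expects candidates to write and debug, here is a minimal sketch combining a CTE, a join, and a subquery; the invoice and customer tables are hypothetical, and SQLite is used only to keep the example self-contained:

```python
# Sketch of a "complex" query shape: CTE + join + subquery.
# Table and column names are hypothetical; SQLite keeps it self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (id INTEGER, customer_id INTEGER, amount REAL, status TEXT);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO invoices VALUES (10, 1, 500.0, 'PAID'), (11, 1, 250.0, 'OPEN'),
                                (12, 2, 900.0, 'OPEN');
""")

query = """
WITH open_totals AS (                       -- CTE: open exposure per customer
    SELECT customer_id, SUM(amount) AS open_amount
    FROM invoices
    WHERE status = 'OPEN'
    GROUP BY customer_id
)
SELECT c.name, o.open_amount
FROM open_totals o
JOIN customers c ON c.id = o.customer_id    -- join back to the customer table
WHERE o.open_amount > (SELECT AVG(amount) FROM invoices);  -- subquery filter
"""
for row in conn.execute(query):
    print(row)   # expected: ('Globex', 900.0)
```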
Job Title: Lead Data Engineer
📍 Location: Pune
🧾 Experience: 10+ Years
💰 Budget: Up to 1.7 LPM
Responsibilities
- Collaborate with Data & ETL teams to review, optimize, and scale data architectures within Snowflake.
- Design, develop, and maintain efficient ETL/ELT pipelines and robust data models.
- Optimize SQL queries for performance and cost efficiency.
- Ensure data quality, reliability, and security across pipelines and datasets.
- Implement Snowflake best practices for performance, scaling, and governance.
- Participate in code reviews, knowledge sharing, and mentoring within the data engineering team.
- Support BI and analytics initiatives by enabling high-quality, well-modeled datasets.
Exp: 10+ Years
CTC: 1.7 LPM
Location: Pune
Snowflake Expertise Profile
Should hold 10+ years of experience, with strong skills and a core understanding of cloud data warehouse principles, and extensive experience in designing, building, optimizing, and maintaining robust and scalable data solutions on the Snowflake platform.
Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring, and security handling.
Responsibilities:
* Collaboration with Data and ETL team to review code, understand current architecture and help improve it based on Snowflake offerings and experience
* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL or BI.
* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake (an illustrative optimization sketch follows this list).
* Ensure data quality, integrity, and security within the Snowflake environment.
* Participate in code reviews and contribute to the team's development standards.
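As a hedged illustration of Snowflake-side optimisation work, the sketch below uses the snowflake-connector-python package to add a clustering key and run a pruned, filtered query; the account details and the FACT_ORDERS table are hypothetical placeholders:

```python
# Snowflake optimisation sketch; connection details and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Cluster a large fact table on the column most queries filter by,
# so micro-partition pruning can skip data.
cur.execute("ALTER TABLE FACT_ORDERS CLUSTER BY (ORDER_DATE)")

# Replace a wide SELECT * with a pruned, filtered projection.
cur.execute("""
    SELECT ORDER_ID, CUSTOMER_ID, ORDER_TOTAL
    FROM FACT_ORDERS
    WHERE ORDER_DATE >= DATEADD(day, -30, CURRENT_DATE)
""")
print(cur.fetchmany(10))
cur.close()
conn.close()
```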
Education:
* Bachelor’s degree in Computer Science, Data Science, Information Technology, or an equivalent field.
* Relevant Snowflake certifications are a plus (e.g., Snowflake certified Pro / Architecture / Advanced).
To be successful in this role, you should possess
• Collaborate closely with Product Management and Engineering leadership to devise and build the
right solution.
• Participate in Design discussions and brainstorming sessions to select, integrate, and maintain Big
Data tools and frameworks required to solve Big Data problems at scale.
• Design and implement systems to cleanse, process, and analyze large data sets using distributed
processing tools like Akka and Spark.
• Understanding and critically reviewing existing data pipelines, and coming up with ideas in
collaboration with Technical Leaders and Architects to improve upon current bottlenecks
• Take initiative, show the drive to pick up new things proactively, and work as a senior individual contributor on the multiple products and features we have.
• 3+ years of experience in developing highly scalable Big Data pipelines.
• In-depth understanding of the Big Data ecosystem including processing frameworks like Spark,
Akka, Storm, and Hadoop, and the file types they deal with.
• Experience with ETL and Data pipeline tools like Apache NiFi, Airflow etc.
• Excellent coding skills in Java or Scala, including the understanding to apply appropriate Design
Patterns when required.
• Experience with Git and build tools like Gradle/Maven/SBT.
• Strong understanding of object-oriented design, data structures, algorithms, profiling, and
optimization.
• Have an elegant, readable, maintainable, and extensible code style.
You are someone who would easily be able to
• Work closely with the US and India engineering teams to help build the Java/Scala based data
pipelines
• Lead the India engineering team in technical excellence and ownership of critical modules; own
the development of new modules and features
• Troubleshoot live production server issues.
• Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal supervision
• Follow Agile methodology, JIRA for work planning, issue management/tracking
Additional Project/Soft Skills:
• Should be able to work independently with India & US based team members.
• Strong verbal and written communication with ability to articulate problems and solutions over phone and emails.
• Strong sense of urgency, with a passion for accuracy and timeliness.
• Ability to work calmly in high pressure situations and manage multiple projects/tasks.
• Ability to work independently and possess superior skills in issue resolution.
• Should have the passion to learn and implement, analyze and troubleshoot issues
Role Overview
We are seeking a skilled and highly motivated ETL Developer to fill a key role on a distributed team, in a dynamic, fast-paced environment. This project is an enterprise-wide distributed system with users worldwide.
This hands-on role requires the candidate to work collaboratively in a squad following a Scaled Agile development methodology. You must be a self-starter, delivery-focused, and possess a broad set of technology skills.
We will count on you to:
- Design, code, test, and debug new and existing software applications, primarily using ETL technologies and relational database languages.
- Excellent documentation and presentation skills, analytical and critical thinking skills, and the ability to identify needs and take initiative
- Proven expertise working on large scale enterprise applications
- Working on Agile/Scrum/Spotify development methodology
- Quickly learn new technologies, solve complex problems, and ramp up on new projects.
- Communicate effectively and be able to review one's own work, as well as others', with particular attention to accuracy and detail.
- The candidate must demonstrate strong knowledge of ETL technology and be able to work effectively on distributed components.
- Investigate research and correct defects effectively and efficiently.
- Ensure code meets specifications, quality and security standards, and is maintainable
- Complete work within prescribed standards and follow prescribed workflow process.
- Unit test software components efficiently and effectively
- Ensure that solution requirements are gathered accurately, understood, and that all stakeholders have transparency on impacts
- Follow engineering best practices and principles within your organisation
- Work closely with a Lead Software Engineer
- Build strong relationships with members of your engineering squad
What you need to have:
- Proven track record of successfully delivering software solutions
- The ability to communicate effectively to both technical and non-technical colleagues in a cross-functional environment
- Some experience or knowledge of working with Agile at Scale, Lean and Continuous Delivery approaches such as Continuous Integration, Test-Driven Development and Infrastructure as Code
- Some experience with cloud native software architectures
- Proven experience in the remediation of SAST/DAST findings
- Understanding of CI/CD and DevOps practices
- Strong Self-starter and active squad contributor
Technical Skills or Qualifications Required:
Mandatory Skills:
- Strong ETL skills: SnapLogic
- Expertise in relational databases (Oracle, SQL Server/SSMS) and familiarity with NoSQL databases (MongoDB)
- Knowledge of data warehousing concepts and data modelling
- Experience performing validations on large-scale data
- Strong REST API, JSON, and data transformation experience
- Experience with Unit Testing and Integration Testing
- Knowledge of SDLC processes, practices, and experience with some or all of: Confluence, JIRA, ADO, Github etc.
Role Overview:
We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), data warehousing principles, and proficiency in data manipulation and visualization tools, including Python and SQL.
Responsibilities:
- Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
- Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
- Define data governance policies and procedures to ensure data quality, security, and compliance.
- Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
- Develop and execute data migration strategies to Oracle Cloud.
- Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
- Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
- Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders.
- Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
- Ensure the performance and reliability of data visualization dashboards and reports.
- Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
- Troubleshoot data-related issues and provide timely resolutions.
- Document data architectures, data flows, and data visualization solutions.
- Participate in the evaluation and selection of new data technologies and tools.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
- Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role.
- Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
- Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
- Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
- Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
- Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
- Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights.
- Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences.
- Experience with data governance and data quality principles.
- Familiarity with agile development methodologies.
- Ability to work independently and collaboratively within a team environment.
Application Link- https://forms.gle/km7n2WipJhC2Lj2r5
Job Title: IntelliMatch
Location: Bangalore (Hybrid)
Experience: 6+ Years
Employment Type: Full-time
Role:
We are looking for a highly skilled and experienced professional with deep knowledge of IntelliMatch and Microsoft SQL Server (MSSQL) to join our dynamic team. The ideal candidate should have a strong understanding of financial reconciliations and hands-on experience implementing IntelliMatch solutions in enterprise environments. Working experience with ETL tools, preferably Informatica, is also required.
Key Responsibilities:
- Design, configure, and implement reconciliation processes using IntelliMatch.
- Manage and optimize data processing workflows in MSSQL for high-performance reconciliation systems.
- Work with ETL tools, preferably Informatica, to support reconciliation data flows.
- Collaborate with business analysts and stakeholders to gather reconciliation requirements and translate them into technical solutions.
- Troubleshoot and resolve complex issues in the reconciliation process.
- Ensure data accuracy, integrity, and compliance with business rules and controls.
- Support end-to-end testing and deployment processes.
- Document technical solutions and maintain configuration records.
- 6+ years of IT experience with a strong focus on IntelliMatch (FIS) implementation and support.
- Hands-on expertise in MSSQL writing complex queries, stored procedures, performance tuning, etc.
- Strong knowledge of reconciliation workflows in financial services (a generic pandas reconciliation sketch follows this list).
- Ability to work independently in a fast-paced environment.
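IntelliMatch itself is configured through its own tooling, so the snippet below is only a generic pandas sketch of the underlying reconciliation idea (match two sources on a key and surface breaks); the file and column names are hypothetical:

```python
# Generic reconciliation sketch, not IntelliMatch-specific.
# File and column names are hypothetical.
import pandas as pd

ledger = pd.read_csv("internal_ledger.csv")        # e.g. columns: trade_id, amount
statement = pd.read_csv("custodian_statement.csv")

merged = ledger.merge(
    statement, on="trade_id", how="outer",
    suffixes=("_ledger", "_custodian"), indicator=True,
)

# Breaks: records present on only one side, or matched rows with amount differences.
missing = merged[merged["_merge"] != "both"]
mismatched = merged[
    (merged["_merge"] == "both")
    & (merged["amount_ledger"].sub(merged["amount_custodian"]).abs() > 0.01)
]

print(f"{len(missing)} unmatched records, {len(mismatched)} amount breaks")
```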
Proven experience as a Data Scientist or in a similar role, with at least 4 years of relevant experience and 6-8 years of total experience.
· Technical expertise regarding data models, database design and development, data mining, and segmentation techniques
· Strong knowledge of and experience with reporting packages (Business Objects or similar), databases, and programming in ETL frameworks
· Experience with data movement and management in the Cloud utilizing a combination of Azure or AWS features
· Hands on experience in data visualization tools – Power BI preferred
· Solid understanding of machine learning
· Knowledge of data management and visualization techniques
· A knack for statistical analysis and predictive modeling
· Good knowledge of Python and MATLAB
· Experience with SQL and NoSQL databases including ability to write complex queries and procedures
Key Responsibilities
- Data Architecture & Pipeline Development
- Design, implement, and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics (a minimal PySpark sketch follows this list).
- Integrate structured, semi-structured, and unstructured data from multiple sources.
- Data Storage & Management
- Develop and maintain Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake solutions.
- Ensure proper indexing, partitioning, and storage optimization for performance.
- Data Governance & Security
- Implement role-based access control, data encryption, and compliance with GDPR/CCPA.
- Ensure metadata management and data lineage tracking with Azure Purview or similar tools.
- Collaboration & Stakeholder Engagement
- Work closely with BI developers, analysts, and business teams to translate requirements into data solutions.
- Provide technical guidance and best practices for data integration and transformation.
- Monitoring & Optimization
- Set up monitoring and alerting for data pipelines.
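As a rough sketch of one ELT step on this stack, the snippet below shows PySpark code as it might run in a Databricks notebook, reading raw JSON from ADLS and writing a partitioned Delta table; the storage paths and column names are hypothetical:

```python
# PySpark ELT sketch (Databricks-style); paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

cleaned = (raw
    .withColumn("event_date", F.to_date("event_timestamp"))
    .dropDuplicates(["event_id"])
    .filter(F.col("event_type").isNotNull()))

# Write a partitioned Delta table for downstream Synapse/Power BI consumption.
(cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/events_delta/"))
```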
- Experience:
- 7+ years of experience in ETL development using IBM DataStage.
- Hands-on experience with designing, developing, and maintaining ETL jobs for data warehousing or business intelligence solutions.
- Experience with data integration across relational databases (e.g., IBM DB2, Oracle, MS SQL Server), flat files, and other data sources.
- Technical Skills:
- Strong proficiency in IBM DataStage (Designer, Director, Administrator, and Manager components).
- Expertise in SQL and database programming (e.g., PL/SQL, T-SQL).
- Familiarity with data warehousing concepts, data modeling, and ETL/ELT processes.
- Experience with scripting languages (e.g., UNIX shell scripting) for automation.
- Knowledge of CI/CD tools (e.g., Git, BitBucket, Artifactory) and Agile methodologies.
- Familiarity with IBM Watsonx.data integration or other ETL tools (e.g., Informatica, Talend) is a plus.
- Experience with big data technologies (e.g., Hadoop) is an advantage.
- Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills to collaborate with stakeholders and cross-functional teams.
- Ability to work independently and manage multiple priorities in a fast-paced environment.
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
- 5–10 years of experience in ETL Testing, Snowflake, and DWH concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience with Azure and Snowflake testing is a plus.
- Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus.
- Strong data warehousing concepts; ETL tools like Talend Cloud Data Integration, Pentaho/Kettle.
- Experience in JIRA and Xray defect management tools is good to have.
- Exposure to financial domain knowledge is considered a plus.
- Testing data readiness (data quality) and addressing code or data issues.
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify root cause of code/data issues and come up with a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
Key Attributes include:
- Team player with professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
Role: Cleo EDI Solution Architect / Sr EDI Developer
Location : Remote
Start Date – asap
This is a niche technology (Cleo EDI), which enables the integration of ERP with Transportation Management, Extended Supply Chain, and related systems.
Expertise in designing and developing end-to-end integration solutions, especially B2B integrations involving EDI (Electronic Data Interchange) and APIs.
Familiarity with Cleo Integration Cloud or similar EDI platforms.
Strong experience with Azure Integration Services, particularly:
- Azure Data Factory – for orchestrating data movement and transformation
- Azure Functions – for serverless compute tasks in integration pipelines
- Azure Logic Apps or Service Bus – for message handling and triggering workflows
Understanding of ETL/ELT processes and data mapping.
Solid grasp of EDI standards (e.g., X12, EDIFACT) and workflows.
Experience working with EDI developers and analysts to align business requirements with technical implementation.
Familiarity with Cleo EDI tools or similar platforms.
Develop and maintain EDI integrations using Cleo Integration Cloud (CIC), Cleo Clarify, or similar Cleo solutions.
Create, test, and deploy EDI maps for transactions such as 850, 810, 856, etc., and other EDI/X12/EDIFACT documents.
Configure trading partner setups, including communication protocols (AS2, SFTP, FTP, HTTPS).
Monitor EDI transaction flows, identify errors, troubleshoot, and implement fixes.
Collaborate with business analysts, ERP teams, and external partners to gather and analyze EDI requirements.
Document EDI processes, mappings, and configurations for ongoing support and knowledge sharing.
Provide timely support for EDI-related incidents, ensuring minimal disruption to business operations.
Participate in EDI onboarding projects for new trading partners and customers.
🚀 Hiring: Manual Tester
⭐ Experience: 5+ Years
📍 Location: Pan India
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
Must-Have Skills:
✅5+ years of experience in Manual Testing
✅Solid experience in ETL, Database, and Report Testing
✅Strong expertise in SQL queries, RDBMS concepts, and DML/DDL operations
✅Working knowledge of BI tools such as Power BI
✅Ability to write effective Test Cases and Test Scenarios
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at the Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect & develop robust applications.
You will work closely with business and product teams to build applications which provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.
Develop systems by implementing software development principles and clean code practices so that they are scalable, secure, highly resilient, and have low latency.
Should be open to working in a start-up environment and have the confidence to deal with complex issues, keeping the focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of Database concepts: SOQL, SOSL and SQL
● Hands-on experience in API integration using SOAP, REST APIs, and GraphQL
● Experience with ETL tools , Data migration, and Data governance
● Experience with Apex Design Patterns, Integration Patterns and Apex testing
framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, and Bitbucket
● Should have worked with at least one programming language (Java, Python, C++) and have a good understanding of data structures
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain
∙Designing, building, and automating ETL processes using AWS services like Apache Sqoop, AWS S3, AWS CLI, Amazon EMR, Amazon MSK, and Amazon SageMaker (a minimal boto3 sketch follows this list).
∙Developing and maintaining data pipelines to move and transform data from diverse sources into data warehouses or
data lakes.
∙Ensuring data quality and integrity through validation, cleansing, and monitoring ETL processes.
∙Optimizing ETL workflows for performance, scalability, and cost efficiency within the AWS environment.
∙Troubleshooting and resolving issues related to data processing and ETL workflows.
∙Implementing and maintaining security measures and compliance standards for data pipelines and infrastructure.
∙Documenting ETL processes, data mappings, and system architecture.
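As one hedged example of automating this stack, the snippet below uses boto3 to submit a Spark step to an existing EMR cluster; the cluster ID, script path, and bucket names are hypothetical placeholders:

```python
# boto3 sketch: submit a Spark step to an existing EMR cluster.
# Cluster ID, script path, and bucket names are hypothetical.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLECLUSTERID",
    Steps=[{
        "Name": "nightly-etl",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "s3://example-bucket/jobs/nightly_etl.py",
                "--input", "s3://example-bucket/raw/",
                "--output", "s3://example-bucket/curated/",
            ],
        },
    }],
)
print(response["StepIds"])
```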
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
- Dataflow for real-time and batch data processing
- Cloud Functions for lightweight serverless compute
- BigQuery for data warehousing and analytics
- Cloud Composer for orchestration of data workflows (based on Apache Airflow; a minimal DAG sketch follows this responsibilities list)
- Google Cloud Storage (GCS) for managing data at scale
- IAM for access control and security
- Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
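For orientation, here is a minimal Airflow DAG sketch of the orchestration piece (Cloud Composer runs standard Airflow); the task bodies are deliberately left empty, and the DAG id, schedule, and function names are hypothetical:

```python
# Minimal Airflow DAG sketch for Cloud Composer; names and schedule are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_gcs(**context):
    """Pull source data and land it in GCS (details omitted in this sketch)."""
    ...

def load_to_bigquery(**context):
    """Load the landed files into a BigQuery staging table (details omitted)."""
    ...

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_gcs", python_callable=extract_to_gcs)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
    extract >> load   # load runs only after extraction succeeds
```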
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
About Us
Alfred Capital - Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.
As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.
What We Build
- Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
- DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
- ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
- High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
- Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.
Evaluation Process
- HR Discussion – A brief conversation to understand your motivation and alignment with the role.
- Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
- Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
- Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
- Final Interview – A concluding round to explore your background, interests, and team fit in depth.
- Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.
Job Description : Blockchain Data & ML Engineer
As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.
What You’ll Work On
- Build and maintain ETL pipelines for ingesting and processing blockchain data.
- Assist in designing, training, and validating machine learning models for prediction and anomaly detection (a minimal scikit-learn sketch follows this list).
- Evaluate model performance, tune hyperparameters, and document experimental results.
- Develop monitoring tools to track model accuracy, data drift, and system health.
- Collaborate with infrastructure and execution teams to integrate ML components into production systems.
- Design and maintain databases and storage systems to efficiently manage large-scale datasets.
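As a small illustration of the anomaly-detection idea (not the team's actual models), the sketch below fits a scikit-learn IsolationForest on toy on-chain style features; the feature choices and thresholds are purely illustrative:

```python
# Toy anomaly-detection sketch; features and parameters are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy feature matrix: [transfer_value, gas_used] with a few injected outliers.
normal = rng.normal(loc=[1.0, 50_000], scale=[0.2, 5_000], size=(500, 2))
outliers = np.array([[25.0, 400_000], [30.0, 10_000]])
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)          # -1 = anomaly, 1 = normal

print("flagged rows:", np.where(labels == -1)[0])
```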
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
- Curiosity about how blockchain systems and crypto markets work under the hood.
- Self-motivated, eager to experiment and learn in a dynamic environment.
Bonus Points For
- Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
- Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
- Participation in hackathons or open-source contributions.
What You’ll Gain
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.
Location: Bangalore – Hebbal – 5 Days - WFO
Type: Contract – 6 Months to start with, extendable
Experience Required: 5+ years in Data Analysis, with ERP migration experience
Key Responsibilities:
- Analyze and map data from SAP to JD Edwards structures.
- Define data transformation rules and business logic.
- Assist with data extraction, cleansing, and enrichment.
- Collaborate with technical teams to design and execute ETL processes.
- Perform data validation and reconciliation before and after migration.
- Work closely with business stakeholders to understand master and transactional data requirements.
- Support the creation of reports to validate data accuracy in JDE.
- Document data mapping, cleansing rules, and transformation processes.
- Participate in testing cycles and assist with UAT data validation.
Required Skills and Qualifications:
- Strong experience in SAP ERP data models (FI, MM, SD, etc.).
- Knowledge of JD Edwards EnterpriseOne data structure is a plus.
- Proficiency in Excel, SQL, and data profiling tools.
- Experience in data migration tools like SAP BODS, Talend, or Informatica.
- Strong analytical, problem-solving, and documentation skills.
- Excellent communication and collaboration skills.
- ERP migration project experience is essential.