16+ ELT Jobs in India


Job Title: Senior Data Engineer
Location: Bangalore | Hybrid
Company: krtrimaIQ Cognitive Solutions
Role Overview:
As a Senior Data Engineer, you will design, build, and optimize robust data foundations and end-to-end solutions to unlock maximum value from data across the organization. You will play a key role in fostering data-driven thinking, not only within the IT function but also among broader business stakeholders. You will serve as a technology and subject matter expert, providing mentorship to junior engineers and translating the company’s vision and Data Strategy into actionable, high-impact IT solutions.
Key Responsibilities:
- Design, develop, and implement scalable data solutions to support business objectives and drive digital transformation.
- Serve as a subject matter expert in data engineering, providing guidance and mentorship to junior team members.
- Enable and promote data-driven culture throughout the organization, engaging both technical and business stakeholders.
- Lead the design and delivery of Data Foundation initiatives, ensuring adoption and value realization across business units.
- Collaborate with business and IT teams to capture requirements, design optimal data models, and deliver high-value insights.
- Manage and drive change management, incident management, and problem management processes related to data platforms.
- Present technical reports and actionable insights to stakeholders and leadership teams, acting as the expert in Data Analysis and Design.
- Continuously improve efficiency and effectiveness of solution delivery, driving down costs and reducing implementation times.
- Contribute to organizational knowledge-sharing and capability building (e.g., Centers of Excellence, Communities of Practice).
- Champion best practices in code quality, DevOps, CI/CD, and data governance throughout the solution lifecycle.
Key Characteristics:
- Technology expert with a passion for continuous learning and exploring multiple perspectives.
- Deep expertise in the data engineering/technology domain, with hands-on experience across the full data stack.
- Excellent communicator, able to bridge the gap between technical teams and business stakeholders.
- Trusted leader, respected across levels for subject matter expertise and collaborative approach.
Mandatory Skills & Experience:
- Mastery in public cloud platforms: AWS, Azure, SAP
- Mastery in ELT (Extract, Load, Transform) operations
- Advanced data modeling expertise for enterprise data platforms
Hands-on skills:
- Data Integration & Ingestion
- Data Manipulation and Processing
- Source/version control and DevOps tools: GitHub, GitHub Actions, Azure DevOps
- Data engineering/data platform tools: Azure Data Factory, Databricks, SQL Database, Synapse Analytics, Stream Analytics, AWS Glue, Apache Airflow, AWS Kinesis, Amazon Redshift, SonarQube, PyTest
- Experience building scalable and reliable data pipelines for analytics and other business applications (a minimal ELT orchestration sketch follows this list)
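For illustration only, here is a minimal sketch of the kind of orchestrated ELT pipeline these skills describe, using Apache Airflow and Amazon Redshift from the tool list above. All bucket, table, and connection names are hypothetical, the Amazon and Postgres provider packages are assumed to be installed, and the operator choices are one possible pattern rather than the team's actual codebase.

```python
# Minimal ELT sketch with Apache Airflow: load raw files into a staging table,
# then transform inside the warehouse. Names and connections are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator

with DAG(
    dag_id="orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    # Load: copy raw files from S3 straight into a staging table (no upfront transform).
    load_raw = S3ToRedshiftOperator(
        task_id="load_raw_orders",
        s3_bucket="example-raw-bucket",     # hypothetical bucket
        s3_key="orders/{{ ds }}/",
        schema="staging",
        table="orders_raw",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )

    # Transform: push the transformation down into the warehouse as plain SQL.
    transform = PostgresOperator(
        task_id="transform_orders",
        postgres_conn_id="redshift_default",   # Redshift speaks the Postgres protocol
        sql="""
            INSERT INTO analytics.orders_daily
            SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
            FROM staging.orders_raw
            WHERE order_date = '{{ ds }}'
            GROUP BY order_date;
        """,
    )

    load_raw >> transform
```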
Optional/Preferred Skills:
- Project management experience, especially running or contributing to Scrum teams
- Experience working with BPC (Business Planning and Consolidation), Planning tools
- Exposure to working with external partners in the technology ecosystem and vendor management
What We Offer:
- Opportunity to leverage cutting-edge technologies in a high-impact, global business environment
- Collaborative, growth-oriented culture with strong community and knowledge-sharing
- Chance to influence and drive key data initiatives across the organization

The Opportunity
We’re looking for a Senior Data Engineer to join our growing Data Platform team. This role is a hybrid of data engineering and business intelligence, ideal for someone who enjoys solving complex data challenges while also building intuitive and actionable reporting solutions.
You’ll play a key role in designing and scaling the infrastructure and pipelines that power analytics, dashboards, machine learning, and decision-making across Sonatype. You’ll also be responsible for delivering clear, compelling, and insightful business intelligence through tools like Looker Studio and advanced SQL queries.
What You’ll Do
- Design, build, and maintain scalable data pipelines and ETL/ELT processes.
- Architect and optimize data models and storage solutions for analytics and operational use.
- Create and manage business intelligence reports and dashboards using tools like Looker Studio, Power BI, or similar.
- Collaborate with data scientists, analysts, and stakeholders to ensure datasets are reliable, meaningful, and actionable.
- Own and evolve parts of our data platform (e.g., Airflow, dbt, Spark, Redshift, or Snowflake).
- Write complex, high-performance SQL queries to support reporting and analytics needs.
- Implement observability, alerting, and data quality monitoring for critical pipelines (a data-quality check sketch follows this list).
- Drive best practices in data engineering and business intelligence, including documentation, testing, and CI/CD.
- Contribute to the evolution of our next-generation data lakehouse and BI architecture.
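As a purely illustrative aside, a data-quality check of the kind mentioned above might look like the following minimal sketch. The table names, thresholds, and the use of a plain DB-API connection (psycopg2 here) are assumptions for the example, not a description of the team's actual stack.

```python
# Minimal data-quality check sketch: each check is a SQL probe plus a threshold.
import psycopg2  # any DB-API driver for your warehouse would do

CHECKS = {
    # check name: (SQL returning a single number, maximum allowed value)
    "null_customer_ids": (
        "SELECT COUNT(*) FROM analytics.orders WHERE customer_id IS NULL", 0),
    "late_arriving_rows": (
        "SELECT COUNT(*) FROM analytics.orders WHERE loaded_at < order_date", 100),
}

def run_checks(conn) -> list[str]:
    """Run each check and return descriptions of the ones that failed."""
    failures = []
    with conn.cursor() as cur:
        for name, (sql, threshold) in CHECKS.items():
            cur.execute(sql)
            (value,) = cur.fetchone()
            if value > threshold:
                failures.append(f"{name}: {value} > {threshold}")
    return failures

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=analytics")   # hypothetical DSN
    for failure in run_checks(conn):
        print("DATA QUALITY ALERT:", failure)     # in practice, page or post to a channel
```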
What We’re Looking For
Minimum Qualifications
- 5+ years of experience as a Data Engineer or in a hybrid data/reporting role.
- Strong programming skills in Python, Java, or Scala.
- Proficiency with data tools such as Databricks, data modeling techniques (e.g., star schema, dimensional modeling), and data warehousing solutions like Snowflake or Redshift.
- Hands-on experience with modern data platforms and orchestration tools (e.g., Spark, Kafka, Airflow).
- Proficient in SQL with experience in writing and optimizing complex queries for BI and analytics.
- Experience with BI tools such as Looker Studio, Power BI, or Tableau.
- Experience in building and maintaining robust ETL/ELT pipelines in production.
- Understanding of data quality, observability, and governance best practices.
Bonus Points
- Experience with dbt, Terraform, or Kubernetes.
- Familiarity with real-time data processing or streaming architectures.
- Understanding of data privacy, compliance, and security best practices in analytics and reporting.
Why You’ll Love Working Here
- Data with purpose: Work on problems that directly impact how the world builds secure software.
- Full-spectrum impact: Use both engineering and analytical skills to shape product, strategy, and operations.
- Modern tooling: Leverage the best of open-source and cloud-native technologies.
- Collaborative culture: Join a passionate team that values learning, autonomy, and real-world impact.

About the Role
We’re hiring a Data Engineer to join our Data Platform team. You’ll help build and scale the systems that power analytics, reporting, and data-driven features across the company. This role works with engineers, analysts, and product teams to make sure our data is accurate, available, and usable.
What You’ll Do
- Build and maintain reliable data pipelines and ETL/ELT workflows.
- Develop and optimize data models for analytics and internal tools.
- Work with team members to deliver clean, trusted datasets.
- Support core data platform tools like Airflow, dbt, Spark, Redshift, or Snowflake.
- Monitor data pipelines for quality, performance, and reliability.
- Write clear documentation and contribute to test coverage and CI/CD processes.
- Help shape our data lakehouse architecture and platform roadmap.
What You Need
- 2–4 years of experience in data engineering or a backend data-related role.
- Strong skills in Python or another backend programming language.
- Experience working with SQL and distributed data systems (e.g., Spark, Kafka).
- Familiarity with NoSQL stores like HBase or similar.
- Comfortable writing efficient queries and building data workflows.
- Understanding of data modeling for analytics and reporting.
- Exposure to tools like Airflow or other workflow schedulers.
Bonus Points
- Experience with dbt, Databricks, or real-time data pipelines.
- Familiarity with cloud infrastructure tools like Terraform or Kubernetes.
- Interest in data governance, ML pipelines, or compliance standards.
Why Join Us?
- Work on data that supports meaningful software security outcomes.
- Use modern tools in a cloud-first, open-source-friendly environment.
- Join a team that values clarity, learning, and autonomy.
If you're excited about building impactful software and helping others do the same, this is an opportunity to grow as a technical leader and make a meaningful difference.
Role: Cleo EDI Solution Architect / Sr EDI Developer
Location : Remote
Start Date : ASAP
Cleo EDI is a niche technology that enables integration of ERP systems with Transportation Management, Extended Supply Chain, and similar platforms.
Expertise in designing and developing end-to-end integration solutions, especially B2B integrations involving EDI (Electronic Data Interchange) and APIs.
Familiarity with Cleo Integration Cloud or similar EDI platforms.
Strong experience with Azure Integration Services, particularly:
- Azure Data Factory – for orchestrating data movement and transformation
- Azure Functions – for serverless compute tasks in integration pipelines
- Azure Logic Apps or Service Bus – for message handling and triggering workflows
Understanding of ETL/ELT processes and data mapping.
Solid grasp of EDI standards (e.g., X12, EDIFACT) and workflows.
Experience working with EDI developers and analysts to align business requirements with technical implementation.
Familiarity with Cleo EDI tools or similar platforms.
Develop and maintain EDI integrations using Cleo Integration Cloud (CIC), Cleo Clarify, or similar Cleo solutions.
Create, test, and deploy EDI maps for transactions such as 850, 810, and 856, as well as other X12/EDIFACT documents (a small X12 parsing sketch follows this section).
Configure trading partner setups, including communication protocols (AS2, SFTP, FTP, HTTPS).
Monitor EDI transaction flows, identify errors, troubleshoot, and implement fixes.
Collaborate with business analysts, ERP teams, and external partners to gather and analyze EDI requirements.
Document EDI processes, mappings, and configurations for ongoing support and knowledge sharing.
Provide timely support for EDI-related incidents, ensuring minimal disruption to business operations.
Participate in EDI onboarding projects for new trading partners and customers.
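For readers unfamiliar with X12, the sketch below shows in plain Python how an 850 purchase order breaks into segments and elements. The document fragment is hypothetical, and in practice this mapping is handled by the EDI platform (e.g., Cleo Integration Cloud) rather than hand-rolled code.

```python
# Tiny X12 parsing sketch: segments end with "~", elements are separated by "*".
SEGMENT_TERMINATOR = "~"
ELEMENT_SEPARATOR = "*"

sample_850 = (  # fragment of a hypothetical purchase order (850)
    "ST*850*0001~"
    "BEG*00*SA*PO12345**20240101~"
    "PO1*1*10*EA*9.99**VP*SKU-001~"
    "SE*4*0001~"
)

def parse_segments(raw: str):
    """Yield (segment_id, elements) pairs from an X12 transaction body."""
    for seg in filter(None, raw.split(SEGMENT_TERMINATOR)):
        elements = seg.split(ELEMENT_SEPARATOR)
        yield elements[0], elements[1:]

for seg_id, elements in parse_segments(sample_850):
    print(seg_id, elements)
# The PO1 line item carries quantity 10, unit EA, price 9.99; an EDI map in the
# integration platform translates these elements to and from ERP fields.
```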
Position Summary:
As a CRM ETL Developer, you will be responsible for the analysis, transformation, and integration of data from legacy and external systems into the CRM application. This includes developing ETL/ELT workflows, ensuring data quality through cleansing and survivorship rules, and supporting daily production loads. You will work in an Agile environment and play a vital role in building scalable, high-quality data integration solutions.
Key Responsibilities:
- Analyze data from legacy and external systems; develop ETL/ELT pipelines to ingest and process data.
- Cleanse, transform, and apply survivorship rules before loading data into the CRM platform (see the survivorship sketch after this list).
- Monitor, support, and troubleshoot production data loads (Tier 1 & Tier 2 support).
- Contribute to solution design, development, integration, and scaling of new/existing systems.
- Promote and implement best practices in data integration, performance tuning, and Agile development.
- Lead or support design reviews, technical documentation, and mentoring of junior developers.
- Collaborate with business analysts, QA, and cross-functional teams to resolve defects and clarify requirements.
- Deliver working solutions via quick POCs or prototypes for business scenarios.
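To make the survivorship idea concrete, here is a minimal, hypothetical sketch: when several source systems supply the same customer, the rule below keeps the most recently updated and most complete record before it is loaded into the CRM. The field names and the rule itself are illustrative assumptions, not the project's actual logic.

```python
# Illustrative survivorship rule for duplicate customer records.
from datetime import datetime

records = [  # duplicate customer rows from different source systems
    {"customer_id": "C1", "email": None, "phone": "555-0100",
     "updated_at": datetime(2024, 3, 1), "source": "legacy_erp"},
    {"customer_id": "C1", "email": "a@example.com", "phone": None,
     "updated_at": datetime(2024, 5, 1), "source": "web_portal"},
]

def completeness(rec: dict) -> int:
    """Count populated attributes; a simple proxy for record quality."""
    return sum(v is not None for k, v in rec.items() if k not in ("customer_id", "source"))

def survivor(duplicates: list[dict]) -> dict:
    # Rule: the newest update wins; completeness breaks ties.
    return max(duplicates, key=lambda r: (r["updated_at"], completeness(r)))

print(survivor(records))  # the web_portal record survives in this example
```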
Technical Skills:
- ETL/ELT Tools: 5+ years of hands-on experience in ETL processes using Siebel EIM.
- Programming & Databases: Strong SQL & PL/SQL development; experience with Oracle and/or SQL Server.
- Data Integration: Proven experience in integrating disparate data systems.
- Data Modelling: Good understanding of relational and dimensional modelling and of data warehousing concepts.
- Performance Tuning: Skilled in application and SQL query performance optimization.
- CRM Systems: Familiarity with Siebel CRM, Siebel Data Model, and Oracle SOA Suite is a plus.
- DevOps & Agile: Strong knowledge of DevOps pipelines and Agile methodologies.
- Documentation: Ability to write clear technical design documents and test cases.
Soft Skills & Attributes:
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal abilities.
- Experience working with cross-functional, globally distributed teams.
- Proactive mindset and eagerness to learn new technologies.
- Detail-oriented with a focus on reliability and accuracy.
Preferred Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Experience in Tier 1 & Tier 2 application support roles.
- Exposure to real-time data integration systems is an advantage.
The role reports to the Head of Customer Support, and the position holder is part of the Product Team.
Main objectives of the role
· Focus on customer satisfaction with the product and provide first-line support.
Specialisation
· Customer Support
· SaaS
· FMCG/CPG
Key processes in the role
· Build extensive knowledge of our SaaS product platform and support our customers in using it.
· Support end customers with complex questions.
· Provide detailed, well-elaborated answers to business and “how to” questions from customers.
· Participate in ongoing education for Customer Support Managers.
· Collaborate and communicate with the Development teams, Product Support, and Customers.
Requirements
· Bachelor’s degree in business, IT, Engineering or Economics.
· 4-8 years of experience in a similar role in the IT Industry.
· Solid knowledge of SaaS (Software as a Service).
· Multitasking is second nature to you, and you have a proactive, customer-first mindset.
· 3+ years of experience providing support for ERP systems, preferably SAP.
· Familiarity with ERP/SAP integration processes and data migration.
· Understanding of ERP/SAP functionalities, modules and data structures.
· Understanding of technical areas such as integrations (APIs, ETL, ELT), analysing logs, and identifying errors in logs.
· Experience in looking into code, changing configuration, and analysing if it's a development bug or a product bug.
· Profound understanding of the support processes.
· Should know where to route tickets further and know how to manage customer escalations.
· Outstanding customer service skills.
· Knowledge of Fast-Moving Consumer Goods (FMCG)/ Consumer Packaged Goods (CPG) industry/domain is preferable.
· Excellent verbal and written communication skills in English.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes (a minimal extract-and-land sketch follows this list).
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
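As an illustration of the extract-and-load step described in the responsibilities above, the sketch below pulls records from a hypothetical REST API and lands them as newline-delimited JSON for a warehouse COPY/LOAD step to pick up. The endpoint, pagination scheme, response shape, and file path are all assumptions for the example.

```python
# Hedged extract-and-land sketch: REST API -> newline-delimited JSON staging file.
import json
import requests

API_URL = "https://api.example.com/v1/customers"   # hypothetical source system

def extract(page_size: int = 500):
    """Page through the API and yield raw records."""
    page = 1
    while True:
        resp = requests.get(API_URL, params={"page": page, "page_size": page_size}, timeout=30)
        resp.raise_for_status()
        rows = resp.json()["results"]              # assumed response shape
        if not rows:
            break
        yield from rows
        page += 1

with open("landing/customers.jsonl", "w", encoding="utf-8") as fh:
    for row in extract():
        fh.write(json.dumps(row) + "\n")           # staged file is then COPY'd into the warehouse
```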
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind (a Snowflake Streams/Tasks sketch follows this list).
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
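For illustration, an incremental ELT pattern on Snowflake using Streams and Tasks might look like the following sketch, driven from the Snowflake Python connector. All object names, the schedule, and the MERGE logic are hypothetical; it simply shows the Streams/Tasks style of pipeline named in the mandatory skills.

```python
# Hedged sketch: create a stream on a raw table and a task that merges changes.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="example_user",
    password=os.environ.get("SNOWFLAKE_PASSWORD", ""),   # hypothetical credentials
    warehouse="ELT_WH", database="ANALYTICS", schema="STAGING",
)

statements = [
    # Capture inserts/updates on the raw table.
    "CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders_raw",
    # A task that periodically merges captured changes into the reporting table.
    """
    CREATE TASK IF NOT EXISTS merge_orders
      WAREHOUSE = ELT_WH
      SCHEDULE = '15 MINUTE'
    AS
      MERGE INTO analytics.orders AS t
      USING orders_stream AS s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """,
    "ALTER TASK merge_orders RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```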
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.

Job Title : Senior AWS Data Engineer
Experience : 5+ Years
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Senior AWS Data Engineer to design, build, and optimize scalable data pipelines and data architectures on AWS. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS services (Glue, EMR, Redshift, S3, etc.); a small Glue job-trigger sketch follows this list.
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
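A minimal sketch of driving one of the AWS Glue pipelines mentioned above from Python with boto3 is shown below. The job name, arguments, and region are hypothetical, and in practice an orchestrator (Airflow, Step Functions, etc.) would normally own this polling loop.

```python
# Hedged sketch: trigger an AWS Glue job run and poll it to completion.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(
    JobName="orders-elt-job",                   # hypothetical Glue job
    Arguments={"--load_date": "2024-01-01"},    # passed to the job script
)
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="orders-elt-job", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("Glue job finished with state:", state)
        break
    time.sleep(30)   # fine for a sketch; real pipelines use an orchestrator's sensors
```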
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Job Description:
An Azure Data Engineer is responsible for designing, implementing, and maintaining pipelines and ETL/ELT solutions on the Azure cloud platform. This role requires a strong understanding of database migration technologies and the ability to deploy and manage database solutions in the Azure cloud environment.
Key Skills:
· Minimum 3+ years of experience with data modeling, data warehousing, and building ETL pipelines.
· Must have firm knowledge of SQL, NoSQL, SSIS, SSRS, and ETL/ELT concepts.
· Should have hands-on experience in Databricks, ADF (Azure Data Factory), ADLS, Cosmos DB.
· Excel in the design, creation, and management of very large datasets
· Detailed knowledge of cloud-based data warehouses, architecture, infrastructure components, ETL, and reporting analytics tools and environments.
· Skilled with writing, tuning, and troubleshooting SQL queries
· Experience with Big Data technologies such as Data storage, Data mining, Data analytics, and Data visualization.
· Should be familiar with programming and able to write and debug code in languages such as Node.js, Python, C#, .NET, or Java.
Technical Expertise and Familiarity:
- Cloud Technologies: Azure (ADF, ADB, Logic Apps, Azure SQL database, Azure Key Vaults, ADLS, Synapse)
- Database: CosmosDB, Document DB
- IDEs: Visual Studio, VS Code, MS SQL Server
- Data Modelling, ELT, ETL Methodology
- Creating and managing ETL/ELT pipelines based on requirements
- Build Power BI dashboards and manage the required datasets.
- Work with stakeholders to identify data structures needed for future use and perform any transformations, including aggregations.
- Build data cubes for real-time visualisation needs and CXO dashboards.
Required Tech Skills
- Microsoft Power BI & DAX
- Python, Pandas, PyArrow, Jupyter Notebooks, Apache Spark (a pandas/PyArrow sketch follows this list)
- Azure Synapse, Azure Databricks, Azure HDInsight, Azure Data Factory
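As a small illustration of the Python/Pandas/PyArrow side of this stack, the sketch below reads a raw CSV, aggregates it, and writes Parquet for downstream Power BI or Synapse consumption. The file paths and column names are assumptions for the example.

```python
# Pandas + PyArrow sketch: raw CSV -> daily aggregate -> Parquet.
import pandas as pd

raw = pd.read_csv("raw/sales_2024.csv", parse_dates=["order_date"])   # hypothetical file

daily = (
    raw.assign(order_day=raw["order_date"].dt.date)
       .groupby("order_day", as_index=False)
       .agg(order_count=("order_id", "count"), revenue=("amount", "sum"))
)

# engine="pyarrow" writes Parquet via PyArrow, ready for Power BI / Synapse to read
daily.to_parquet("curated/sales_daily.parquet", engine="pyarrow", index=False)
```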

• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients’ Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT, and data models
Skills (at least 50% of the below):
• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities such as data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe, etc.
• Expertise in data modeling, with a good understanding of modeling approaches such as Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java, or PHP
• Experience in ETL/ELT, either via a code-first approach or using low-code tools such as AWS Glue, AppFlow, Informatica, Talend, Matillion, Fivetran, etc.
• Experience with one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools such as Tableau, Power BI, Domo, or any similar tool
• Experience with data virtualization tools such as Trino, Starburst, Denodo, Data Virtuality, Dremio, etc.
• Certified SnowPro Advanced: Data Engineer is a must.
We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.
In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who:
- Has healthcare experience and is passionate about helping heal people,
- Loves working with data,
- Has an obsessive focus on data quality,
- Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
- Has strong data interrogation and analysis skills,
- Defaults to written communication and delivers clean documentation, and,
- Enjoys working with customers and problem solving for them.
A day in the life at Innovaccer:
- Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
- Measure and communicate impact to our customers.
- Enable customers to activate data themselves using SQL, BI tools, or APIs so they can answer their questions at speed.
What You Need:
- 4+ years of experience in a Data Engineering role, a Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
- Intermediate to advanced level SQL programming skills.
- Data Analytics and Visualization (using tools like PowerBI)
- The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
- Ability to work in a fast-paced and agile environment.
- Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
What we offer:
- Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
- Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
- Health benefits: We cover health insurance for you and your loved ones.
- Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
- Pet-friendly office and open floor plan: No boring cubicles.
Experience Range : 2 Years - 10 Years
Function : Information Technology
Desired Skills :
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (a minimal PySpark sketch follows this listing)
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
Education Type : Engineering
Degree / Diploma : Bachelor of Engineering, Bachelor of Computer Applications, Any Engineering
Specialization / Subject : Any Specialisation
Job Type : Full Time
Job ID : 000018
Department : Software Development
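To illustrate the PySpark skill called out in the listing above, here is a minimal sketch that expresses the same aggregation once with the DataFrame API and once with Spark SQL. Paths, columns, and table names are hypothetical.

```python
# Minimal PySpark sketch: DataFrame API and Spark SQL over the same data.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

orders = spark.read.parquet("s3://example-data-lake/orders/")   # hypothetical lake path

# DataFrame API: filter and aggregate completed orders per day.
daily = (
    orders.where(F.col("status") == "COMPLETED")
          .groupBy("order_date")
          .agg(F.count("*").alias("order_count"), F.sum("amount").alias("revenue"))
)

# Spark SQL: the same business rule expressed as a query.
orders.createOrReplaceTempView("orders")
daily_sql = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")

# Publish the aggregate as a data stream / mart for business consumption.
daily.write.mode("overwrite").parquet("s3://example-data-lake/marts/orders_daily/")
```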