DMS Jobs in Delhi, NCR and Gurgaon


Apply to 11+ DMS Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest DMS Job opportunities across top companies like Google, Amazon & Adobe.

A fast growing Big Data company
Noida, Bengaluru (Bangalore), Chennai, Hyderabad
6 - 8 yrs
₹10L - ₹15L / yr
AWS Glue
SQL
Python
PySpark
Data engineering
+6 more

AWS Glue Developer 

Work Experience: 6 to 8 Years

Work Location:  Noida, Bangalore, Chennai & Hyderabad

Must-Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration, and DataOps

Job Reference ID: BT/F21/IND


Job Description:

Design, build and configure applications to meet business process and application requirements.


Responsibilities:

➢ 7 years of work experience with ETL, data modelling, and data architecture.

➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.

➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions.

➢ Orchestration using Airflow.


Technical Experience:

➢ Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines.

➢ Experience building data pipelines and applications to stream and process large datasets at low latency.


➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.

➢ Create data pipeline architecture by designing and implementing data ingestion solutions.

➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.

➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.

➢ Author ETL processes using Python, Pyspark.

➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.

➢ ETL process monitoring using CloudWatch events.

➢ You will work in collaboration with other teams; good communication skills are a must.

➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
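The ➢ items above describe a classic extract-transform-load loop. In production this would be a Glue PySpark script; as a rough sketch of the same shape using only the standard library (the table and column names here are illustrative, not from the posting):

```python
import sqlite3

def run_etl(conn):
    """Extract raw orders, transform (filter + aggregate), load a summary table."""
    cur = conn.cursor()
    # Extract: pull raw rows from the source table.
    rows = cur.execute("SELECT customer, amount FROM raw_orders").fetchall()
    # Transform: drop invalid amounts and total per customer.
    totals = {}
    for customer, amount in rows:
        if amount is None or amount < 0:
            continue  # data-quality rule: skip bad records
        totals[customer] = totals.get(customer, 0) + amount
    # Load: write the aggregate into the target table.
    cur.execute("CREATE TABLE IF NOT EXISTS order_totals (customer TEXT, total REAL)")
    cur.executemany("INSERT INTO order_totals VALUES (?, ?)", sorted(totals.items()))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [("a", 10.0), ("a", 5.0), ("b", -1.0), ("b", 7.5)])
run_etl(conn)
print(conn.execute("SELECT * FROM order_totals ORDER BY customer").fetchall())
```

A real Glue job would use a GlueContext and DynamicFrames instead of sqlite3, with CloudWatch monitoring the run, but the extract/transform/load stages stay the same.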


Professional Attributes:

➢ Experience operating very large data warehouses or data lakes.

➢ Expert-level skills in writing and optimizing SQL.

➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.

➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.

➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.


Qualification:

➢ Degree in Computer Science, Computer Engineering or equivalent.


Salary: Commensurate with experience and demonstrated competence

Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹8L - ₹13L / yr
Tableau
SQL
Python
Microsoft Excel
Data Analytics
+1 more

Job Title

Data Analyst

 

Job Brief

The successful candidate will turn data into information, information into insight and insight into business decisions.

 

Data Analyst Job Duties

Data analyst responsibilities include conducting full lifecycle analysis to include requirements, activities and design. Data analysts will develop analysis and reporting capabilities. They will also monitor performance and quality control plans to identify improvements.

 

Responsibilities

● Interpret data, analyze results using statistical techniques and provide ongoing reports.

● Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality.

● Acquire data from primary or secondary data sources and maintain databases/data systems.

● Identify, analyze, and interpret trends or patterns in complex data sets.

● Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems.

● Work with management to prioritize business and information needs.

● Locate and define new process improvement opportunities.
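The “filter and clean” and reporting duties above amount to validating raw records and summarizing what survives; a minimal sketch in Python (the fields and values are made up for illustration):

```python
from statistics import mean, median

# Hypothetical raw export: some records are malformed and must be cleaned out.
raw = [{"region": "north", "sales": "120"},
       {"region": "north", "sales": "n/a"},  # bad value to filter out
       {"region": "south", "sales": "95"},
       {"region": "south", "sales": "140"}]

def clean(records):
    """Keep only records whose sales field parses as a number."""
    out = []
    for r in records:
        try:
            out.append({"region": r["region"], "sales": float(r["sales"])})
        except ValueError:
            continue  # "clean" step: drop the unparseable record
    return out

rows = clean(raw)
report = {"count": len(rows),
          "mean_sales": mean(r["sales"] for r in rows),
          "median_sales": median(r["sales"] for r in rows)}
print(report)
```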

 

Requirements

● Proven working experience as a Data Analyst or Business Data Analyst.

● Technical expertise regarding data models, database design development, data mining and segmentation techniques.

● Strong knowledge of and experience with reporting packages (Business Objects, etc.), databases (SQL, etc.), and programming (XML, JavaScript, or ETL frameworks).

● Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc.).

● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.

● Adept at queries, report writing, and presenting findings.

 

Job Location: South Delhi, New Delhi

Epik Solutions
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
4 - 13 yrs
₹7L - ₹18L / yr
Python
SQL
Databricks
Scala
Spark
+2 more

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
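As a rough illustration of the data quality responsibility above, validation rules are often expressed as per-field predicates applied to each record; a minimal standard-library sketch, with the rules and field names invented for the example:

```python
# Each rule maps a field to a predicate; a record fails a rule if its predicate does.
RULES = {
    "patient_id": lambda v: isinstance(v, str) and len(v) > 0,
    "age":        lambda v: isinstance(v, int) and 0 <= v <= 120,
}

def validate(record):
    """Return the list of field names that violate a rule."""
    return [f for f, ok in RULES.items() if not ok(record.get(f))]

records = [{"patient_id": "p1", "age": 34},
           {"patient_id": "",   "age": 34},    # empty id fails the first rule
           {"patient_id": "p3", "age": 200}]   # out-of-range age fails the second
failures = {r["patient_id"]: validate(r) for r in records}
print(failures)
```

In a real Databricks pipeline the same predicates would run as column expressions over a DataFrame, with failing rows quarantined rather than silently dropped.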


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.


Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity could be characterized in terms of volume, dimensionality and multiple touchpoints/sources. We understand the data, ask fundamental-first-principle questions, apply our analytical and machine learning skills to solve the problem in the best way possible. 

 

Our ideal candidate

The role would be a client facing one, hence good communication skills are a must. 

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly - what to predict, how to build DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and build features relevant for business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking, ensemble and applying them correctly to increase accuracy
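The stacking/ensemble point above rests on a simple idea: combine several predictors so their individual errors cancel. A toy majority-vote ensemble (the "models" here are stand-in threshold rules, not trained models):

```python
from collections import Counter

# Three toy "models": each predicts a class label from a feature dict.
def model_a(x): return 1 if x["score"] > 0.5 else 0
def model_b(x): return 1 if x["score"] > 0.4 else 0
def model_c(x): return 1 if x["score"] > 0.7 else 0

def ensemble(x, models):
    """Majority vote over the base models' predictions."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

# Two of the three models vote 1 for score 0.6, so the ensemble predicts 1.
print(ensemble({"score": 0.6}, [model_a, model_b, model_c]))
```

Stacking goes one step further: instead of a fixed vote, a meta-model is trained on the base models' outputs to learn how to weight them.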

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning, and deep learning.
  • Big data experience, e.g. familiarity with Spark and Hadoop, is highly preferred.
  • Familiarity with SQL or other databases.
Extramarks

Posted by Prachi Sharma
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
3 - 5 yrs
₹8L - ₹10L / yr
Tableau
PowerBI
Data Analytics
SQL
Python

Required Experience

· 3+ years of relevant technical experience in a data analyst role

· Intermediate / expert skills with SQL and basic statistics

· Experience in advanced SQL

· Python programming- Added advantage

· Strong problem solving and structuring skills

· Experience automating connections to various data sources and representing the data through dashboards

· Excellent with numbers; able to communicate data points through various reports/templates

· Ability to communicate effectively internally and outside Data Analytics team

· Proactively take up work responsibilities and handle ad-hoc requests as needed

· Ability and desire to take ownership of and initiative for analysis; from requirements clarification to deliverable

· Strong technical communication skills; both written and verbal

· Ability to understand and articulate the "big picture" and simplify complex ideas

· Ability to identify and learn applicable new techniques independently as needed

· Must have worked with various Databases (Relational and Non-Relational) and ETL processes

· Must have experience handling large volumes of data and adhering to optimization and performance standards

· Should have the ability to analyse and provide relationship views of the data from different angles

· Must have excellent Communication skills (written and oral).

· Knowledge of data science is an added advantage

Required Skills

MySQL, advanced Excel, Tableau, reporting and dashboards, MS Office, VBA, analytical skills

Preferred Experience

· Strong understanding of relational databases (MySQL, etc.)

· Prior experience working remotely full-time

· Prior experience working in advanced SQL

· Experience with one or more BI tools, such as Superset, Tableau etc.

· High level of logical and mathematical ability in Problem Solving

InnovAccer

Posted by Jyoti Kaushik
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr
ETL
SQL
Data Warehouse (DWH)
Informatica
Datawarehousing
+2 more

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs to answer their questions at speed.

What You Need:

  • 4+ years of experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data analytics and visualization experience (using tools like Power BI).
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
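The intermediate-to-advanced SQL expectation above typically covers grouped aggregates and filters on those aggregates; one such query, runnable against an in-memory SQLite database (the claims schema is invented for illustration, not from the posting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (patient TEXT, amount REAL);
INSERT INTO claims VALUES ('p1', 100), ('p1', 250), ('p2', 80), ('p3', 90), ('p3', 40);
""")
# Patients whose total claim amount exceeds a threshold, highest first.
# HAVING filters on the aggregate, which WHERE cannot do.
query = """
SELECT patient, SUM(amount) AS total
FROM claims
GROUP BY patient
HAVING total > 100
ORDER BY total DESC
"""
print(conn.execute(query).fetchall())
```

The same GROUP BY/HAVING pattern carries over directly to Snowflake, Redshift, or Postgres.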

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.
Infogain
Agency job
via Technogen India Pvt Ltd by RAHUL BATTA
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune
7 - 8 yrs
₹15L - ₹16L / yr
Data steward
MDM
Tamr
Reltio
Data engineering
+7 more
1. Data Steward:

The Data Steward will collaborate and work closely with the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's data and reporting posture by responsibly managing data assets, data lineage, and data access, supporting sound data analysis. This role focuses on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues and establishes the data stewardship and information management strategy and direction for the group, communicating effectively with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.
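The entity-resolution work mentioned above can be sketched, under heavy simplification, as fuzzy string matching plus greedy clustering; the threshold and sample names below are arbitrary choices for the example:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(names, threshold=0.7):
    """Greedy clustering: each name joins the first cluster whose
    representative (first member) it matches closely enough."""
    clusters = []
    for name in names:
        for cluster in clusters:
            if similarity(name, cluster[0]) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

clusters = resolve(["Acme Corp", "ACME Corp.", "Globex Inc", "Acme Corporation"])
print(clusters)
```

Production MDM tools such as Tamr or Reltio replace this single similarity score with blocking, multiple match rules, and survivorship logic, but the match-then-cluster shape is the same.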

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary aligned to data assets and the business glossary.
  • Responsible for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects.
  • Good problem-solving and decision-making skills.
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), have a strong affinity for defining work in deliverables, and be willing to commit to deadlines.
  • Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
  • Experience in Report and Dashboard development
  • Statistical and Machine Learning models
  • Python (sklearn, numpy, pandas, gensim)
  • Nice to Have:
  • 1yr of ETL experience
  • Natural Language Processing
  • Neural networks and Deep learning
  • Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries

 

Interaction :  Frequently interacts with subordinate supervisors.

Education: Bachelor’s degree, preferably in Computer Science, B.E., or another quantitative field related to the area of assignment. A professional certification related to the area of assignment may be required.

Experience: 7 years of pharmaceutical/biotech/life sciences experience; 5 years of clinical trials experience and knowledge; excellent documentation, communication, and presentation skills, including PowerPoint.

 

Octro Inc

Posted by Reshma Suleman
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 7 yrs
₹10L - ₹20L / yr
Data Science
R Programming
Python

Octro Inc. is looking for a Data Scientist who will support the product, leadership and marketing teams with insights gained from analyzing multiple sources of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. 

 

They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. 

 

They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.

Responsibilities :

- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.

- Mine and analyze data from multiple databases to drive optimization and improvement of product development, marketing techniques and business strategies.

- Assess the effectiveness and accuracy of new data sources and data gathering techniques.

- Develop custom data models and algorithms to apply to data sets.

- Use predictive modelling to increase and optimize user experiences, revenue generation, ad targeting and other business outcomes.

- Develop various A/B testing frameworks and test model qualities.

- Coordinate with different functional teams to implement models and monitor outcomes.

- Develop processes and tools to monitor and analyze model performance and data accuracy.
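The A/B testing responsibility above usually starts with a two-proportion z-test on conversion rates; a minimal version with the standard library (the counts are illustrative only):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 200/2000 conversions; variant B: 260/2000.
z = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level (two-sided)
```

A full A/B framework adds sample-size planning, guardrail metrics, and sequential-testing corrections on top of this core calculation.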

Qualifications :

- Strong problem solving skills with an emphasis on product development and improvement.

- Advanced knowledge of SQL and its use in data gathering/cleaning.

- Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.

- Experience working with and creating data architectures.

- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.

- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.

- Excellent written and verbal communication skills for coordinating across teams.
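Of the statistical techniques listed above, simple linear regression has a closed form worth knowing by heart; a standard-library-only sketch:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed-form solution)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    sxx = sum((x - mx) ** 2 for x in xs)                    # variance term
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Perfectly linear data y = 2x + 1, so the fit should recover slope 2 and intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)
```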

One Labs

Posted by Rahul Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹6L / yr
Data Science
Deep Learning
Python
Keras
TensorFlow
+1 more

Job Description


We are looking for a data scientist who will help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

Responsibilities

  • Selecting features, building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Extending company’s data with third party sources of information when needed
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Doing ad-hoc analysis and presenting results in a clear manner
  • Creating automated anomaly detection systems and constantly tracking their performance
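The anomaly detection bullet above can start from something as simple as a z-score rule; a standard-library sketch (the threshold is a tunable assumption — note a single extreme point inflates the standard deviation, which is why 2.5 rather than 3 is used here):

```python
from statistics import mean, stdev

def anomalies(values, z_threshold=2.5):
    """Flag points more than z_threshold standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) / s > z_threshold]

data = [10, 11, 9, 10, 12, 10, 11, 9, 10, 100]  # one obvious outlier
print(anomalies(data))
```

Tracking the detector's performance, as the bullet asks, means re-estimating the mean and spread on a rolling window and monitoring the flag rate over time; robust statistics (median and MAD) handle contaminated baselines better than mean and stdev.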

Skills and Qualifications

  • Excellent understanding of machine learning techniques and algorithms, such as Linear regression, SVM, Decision Forests, LSTM, CNN etc.
  • Experience with Deep Learning preferred.
  • Experience with common data science toolkits, such as R, NumPy, MATLAB, etc. Excellence in at least one of these is highly desirable
  • Great communication skills
  • Proficiency in using query languages such as SQL, Hive, Pig 
  • Good applied statistics skills, such as statistical testing, regression, etc.
  • Good scripting and programming skills 
  • Data-oriented personality
Cemtics

Posted by Tapan Sahani
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr
Big Data
Spark
Hadoop
SQL
Python
+1 more

JD:

Required Skills:

  • Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
  • Strong practical knowledge of SQL.
  • Hands-on experience with Spark/Spark SQL.
  • Data Structure and Algorithms
  • Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications built on big data technologies.
  • Experience with big data tools such as Hadoop, MapReduce, and Spark.
  • Experience with NoSQL databases like HBase.
  • Experience with Linux OS environments (shell scripting, AWK, SED).
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a large RDBMS (100+ tables).
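The complex-relation SQL skill above is easy to exercise locally with SQLite; a LEFT JOIN plus GROUP BY sketch over an invented two-table schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE events (user_id INTEGER, kind TEXT);
INSERT INTO users  VALUES (1, 'ana'), (2, 'raj'), (3, 'mei');
INSERT INTO events VALUES (1, 'login'), (1, 'click'), (2, 'login');
""")
# Per-user event counts, including users with no events (LEFT JOIN keeps them,
# and COUNT(e.user_id) counts only matched rows, so 'mei' gets 0).
query = """
SELECT u.name, COUNT(e.user_id) AS n_events
FROM users u
LEFT JOIN events e ON e.user_id = u.id
GROUP BY u.id
ORDER BY n_events DESC, u.name
"""
print(conn.execute(query).fetchall())
```

The identical query runs in Spark SQL against registered DataFrames, which is one reason strong SQL transfers directly to Spark work.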
Sagacito

Posted by Neha Verma
NCR (Delhi | Gurgaon | Noida)
8 - 15 yrs
₹18L - ₹35L / yr
Data Science
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Deep Learning
• Analytics, Big Data, Machine Learning (including deep learning methods): algorithm design, analysis, development, and performance improvement.
• Strong understanding of statistical and predictive modeling concepts, machine-learning approaches, clustering, classification, regression techniques, and recommendation (collaborative filtering) algorithms.

Share CV to me at