3+ DMS Jobs in India
Job Title: Lead Database Engineer
Location: Gurgaon Sector-43
Experience Required: 4+ Years
Employment Type: Full-Time
Summary:
We are seeking a highly skilled Lead Database Engineer with expertise in managing and optimizing database systems, primarily focusing on Amazon Aurora PostgreSQL, MySQL, and NoSQL databases. The ideal candidate will have in-depth knowledge of AWS services, database architecture, performance tuning, and security practices.
Key Responsibilities:
1. Database Administration:
- Manage and administer Amazon Aurora PostgreSQL, MySQL, and NoSQL database systems to ensure high availability, performance, and security.
- Implement robust backup and recovery procedures to maintain data integrity.
2. Optimization and Performance:
- Develop and execute optimization strategies at the database, query, collection, and table levels.
- Proactively monitor performance and fine-tune RDS parameter groups for optimal database operations (a sketch follows this responsibilities list).
- Conduct root cause analysis and resolve complex database performance issues.
3. AWS Services and Architecture:
- Leverage AWS services such as RDS, Aurora, and DMS to ensure seamless database operations.
- Perform database version upgrades for PostgreSQL and MySQL, integrating new features and performance enhancements.
4. Replication and Scalability:
- Implement and manage various replication strategies, including master-master and master-slave replication, ensuring data consistency and scalability.
5. Security and Access Control:
- Manage user permissions and roles, maintaining strict security protocols and access controls.
6. Collaboration:
- Work closely with development teams to optimize database design and queries, aligning database performance with application requirements.
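For illustration, a minimal sketch of the parameter-group tuning referred to above, using boto3; the group name and parameter values are hypothetical placeholders, not recommendations:

# Minimal sketch: tuning an RDS PostgreSQL DB parameter group with boto3.
# The group name and values below are hypothetical examples only.
import boto3

rds = boto3.client("rds", region_name="ap-south-1")

rds.modify_db_parameter_group(
    DBParameterGroupName="pg-tuned-params",  # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "work_mem",
            "ParameterValue": "65536",        # in KB; validate against real workloads
            "ApplyMethod": "immediate",       # dynamic parameter, applied at once
        },
        {
            "ParameterName": "max_connections",
            "ParameterValue": "500",
            "ApplyMethod": "pending-reboot",  # static parameter, needs a reboot
        },
    ],
)

Aurora splits settings between cluster-level and instance-level groups, so the equivalent call for cluster-wide parameters is modify_db_cluster_parameter_group.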
Required Skills:
- Strong Expertise: Amazon Aurora PostgreSQL, MySQL, and NoSQL databases.
- AWS Services: Experience with RDS, Aurora, and DMS.
- Optimization: Hands-on experience in query optimization, database tuning, and performance monitoring.
- Replication Strategies: Knowledge of master-master and master-slave replication setups (a sketch follows this list).
- Problem Solving: Proven ability to troubleshoot and resolve complex database issues, including root cause analysis.
- Security: Strong understanding of data security and access control practices.
- Collaboration: Ability to work with cross-functional teams and provide database-related guidance.
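For the replication setups above, a minimal sketch of attaching a self-managed MySQL replica to its source using mysql-connector-python; hosts and credentials are hypothetical placeholders, GTID mode is assumed, and note that on managed RDS/Aurora replicas are created through the AWS API rather than these statements (pre-8.0.23 servers use CHANGE MASTER TO / START SLAVE):

# Minimal sketch: configuring a self-managed MySQL 8.0.23+ replica.
# Hostnames and credentials are hypothetical placeholders.
import mysql.connector

replica = mysql.connector.connect(
    host="replica.example.internal", user="admin", password="***"
)
cur = replica.cursor()

# Point the replica at its source; GTID auto-positioning avoids manual log coordinates.
cur.execute("""
    CHANGE REPLICATION SOURCE TO
        SOURCE_HOST = 'primary.example.internal',
        SOURCE_USER = 'repl',
        SOURCE_PASSWORD = '***',
        SOURCE_AUTO_POSITION = 1
""")
cur.execute("START REPLICA")

# Check replication thread state and lag.
cur.execute("SHOW REPLICA STATUS")
print(cur.fetchone())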
Preferred Qualifications:
- Certification in AWS or database management tools.
- Experience with other NoSQL databases like MongoDB or Cassandra.
- Familiarity with Agile and DevOps methodologies.
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, data integration, and DataOps
Job Reference ID: BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
➢ 7 years of work experience with ETL, data modelling, and data architecture.
➢ Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark.
➢ Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, orchestrated with Airflow (a minimal sketch follows this list).
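A minimal sketch of that Airflow orchestration, starting a Glue job from a task via boto3; the DAG id, job name, and schedule are hypothetical placeholders:

# Minimal sketch: an Airflow DAG that kicks off a Glue job run via boto3.
# DAG id, Glue job name, and schedule are hypothetical placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def start_glue_job(target_date):
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(
        JobName="daily-sales-etl",                 # hypothetical Glue job
        Arguments={"--target_date": target_date},  # Glue job args must start with --
    )
    print("Started run:", run["JobRunId"])

with DAG(
    dag_id="glue_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="run_glue_etl",
        python_callable=start_glue_job,
        op_kwargs={"target_date": "{{ ds }}"},     # templated by Airflow at runtime
    )

The Amazon provider package also ships a GlueJobOperator that wraps this pattern, including waiting for job completion.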
Technical Experience:
Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution, and production support of big data ETL pipelines using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark (a job skeleton follows this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3 (a Spectrum sketch follows this list).
➢ Monitor ETL processes using CloudWatch Events (an example rule follows this list).
➢ You will work in collaboration with other teams; good communication is a must.
➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
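A skeleton of the kind of PySpark Glue job described above; the catalog database, table, and S3 path are hypothetical placeholders:

# Minimal sketch of a Glue PySpark job: read a Data Catalog table,
# transform it, and write partitioned Parquet back to S3.
# Database, table, and bucket names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
).toDF()

# Example transformation: keep completed orders, derive a partition column.
curated = (
    orders.filter(F.col("status") == "COMPLETED")
          .withColumn("order_date", F.to_date("created_at"))
)

# Land the curated data in S3 as date-partitioned Parquet.
curated.write.mode("overwrite").partitionBy("order_date") \
       .parquet("s3://example-curated-bucket/orders/")

job.commit()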
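For the Redshift Spectrum item, a minimal sketch that exposes S3 data as an external table and transforms it into a local table; the cluster endpoint, credentials, IAM role, and S3 paths are hypothetical:

# Minimal sketch: Redshift Spectrum transformations directly over S3 data.
# Cluster endpoint, credentials, role ARN, and paths are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
conn.autocommit = True  # Redshift external DDL cannot run inside a transaction
cur = conn.cursor()

# External schema backed by the Glue Data Catalog.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG DATABASE 'sales_raw'
    IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole'
""")

# External table over Parquet files in S3.
cur.execute("""
    CREATE EXTERNAL TABLE spectrum.orders (
        order_id BIGINT,
        status   VARCHAR(32),
        amount   DECIMAL(12,2)
    )
    STORED AS PARQUET
    LOCATION 's3://example-curated-bucket/orders/'
""")

# Transform straight off S3 and materialize the result inside Redshift.
cur.execute("""
    CREATE TABLE order_totals AS
    SELECT status, SUM(amount) AS total_amount
    FROM spectrum.orders
    GROUP BY status
""")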
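And for the monitoring item, a minimal sketch of an EventBridge (the successor to CloudWatch Events) rule that routes failed Glue job runs to an alerting topic; the rule name and SNS topic ARN are hypothetical:

# Minimal sketch: alert on failed Glue job runs with an EventBridge rule.
# Rule name and SNS topic ARN are hypothetical placeholders.
import json

import boto3

events = boto3.client("events", region_name="ap-south-1")

# Match Glue job runs that end in FAILED or TIMEOUT.
events.put_rule(
    Name="glue-job-failures",
    EventPattern=json.dumps({
        "source": ["aws.glue"],
        "detail-type": ["Glue Job State Change"],
        "detail": {"state": ["FAILED", "TIMEOUT"]},
    }),
    State="ENABLED",
)

# Send matching events to an SNS topic for the on-call channel.
events.put_targets(
    Rule="glue-job-failures",
    Targets=[{
        "Id": "notify-oncall",
        "Arn": "arn:aws:sns:ap-south-1:123456789012:etl-alerts",
    }],
)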
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes.
➢ Expert-level skills in writing and optimizing SQL.
➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures, with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
Roles and Responsibilities:
We are seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. The candidate must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques.
- Extensive experience in providing AWS Cloud solutions for various business use cases.
- Creating star schema data models, performing ETLs, and validating results with business representatives (a schema sketch follows this list).
- Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, and monitoring performance and communicating functional and technical issues.
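As a sketch of the star-schema modelling mentioned above, one fact table keyed to two dimensions on Redshift; all table, column, and connection names are hypothetical placeholders:

# Minimal sketch: a star schema (fact + dimensions) on Amazon Redshift.
# Names and connection details are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE dim_date (
        date_key    INT PRIMARY KEY,   -- e.g. 20240131
        full_date   DATE,
        month_name  VARCHAR(12),
        year_number SMALLINT
    )
""")
cur.execute("""
    CREATE TABLE dim_customer (
        customer_key BIGINT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
        customer_id  VARCHAR(64),                       -- business key
        segment      VARCHAR(32)
    )
""")
cur.execute("""
    CREATE TABLE fact_sales (
        date_key     INT    REFERENCES dim_date (date_key),
        customer_key BIGINT REFERENCES dim_customer (customer_key),
        quantity     INT,
        net_amount   DECIMAL(12,2)
    )
    DISTKEY (customer_key) SORTKEY (date_key)  -- Redshift-specific tuning
""")
conn.commit()

Note that Redshift treats PRIMARY KEY and REFERENCES as informational constraints used by the planner; they are not enforced.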
Job Description:
This position is responsible for the successful delivery of business intelligence information to the entire organization and requires experience in BI development and implementations, data architecture, and data warehousing.
Requisite Qualifications:
- Essential: AWS Certified Database Specialty or AWS Certified Data Analytics
- Preferred: Any other data engineering certification
Requisite Experience:
- Essential: 4-7 years of experience
- Preferred: 2+ years of experience in ETL & data pipelines
Skills Required:
- AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch, etc.
- Big Data: Databricks, Spark, Glue, and Athena
- Expertise in Lake Formation, Python programming, Spark, and shell scripting (a Lake Formation sketch follows this list)
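As a sketch of the Lake Formation access-control work implied above, granting column-level read access to a role via boto3; the principal ARN, database, table, and column names are hypothetical:

# Minimal sketch: column-level SELECT grant via AWS Lake Formation.
# Principal ARN, database, table, and columns are hypothetical placeholders.
import boto3

lf = boto3.client("lakeformation", region_name="ap-south-1")

lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_curated",
            "Name": "orders",
            "ColumnNames": ["order_id", "status", "net_amount"],
        }
    },
    Permissions=["SELECT"],
)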
- Minimum Bachelor’s degree with 5+ years of experience in designing, building, and maintaining AWS data components.
- 3+ years of experience in data component configuration and related role and access setup.
- Expertise in Python programming.
- Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.).
- Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD.
- Hands-on ETL development experience, preferably using SSIS.
- SQL Server experience required.
- Strong analytical skills to solve and model complex business requirements.
- Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques.
Preferred Skills:
- Experience working in a Scrum environment.
- Experience in administration (Windows/Unix/Network) is a plus.
- Experience in SQL Server, SSIS, SSAS, and SSRS.
- Comfortable creating data models and visualizations using Power BI.
- Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools.
- Ability to collaborate on a team with infrastructure, BI report development, and business analyst resources, and clearly communicate solutions to both technical and non-technical team members.