Greetings! We are hiring urgently.
Experience: Minimum 10 years
Location: Delhi
Salary: Negotiable
Role: AWS Data Migration Consultant
Provide data migration strategy, expert review, and guidance on data migration from on-premises to AWS infrastructure, including AWS Fargate, PostgreSQL, and DynamoDB. This includes review and SME inputs on:
· Data migration plan, architecture, policies, procedures
· Migration testing methodologies
· Data integrity, consistency, resiliency.
· Performance and Scalability
· Capacity planning
· Security, access control, encryption
· DB replication and clustering techniques
· Migration risk mitigation approaches
· Verification and integrity testing, and reporting (record- and field-level verification; see the sketch after this list)
· Schema consistency and mapping
· Logging, error recovery
· Dev-test, staging and production artifact promotions and deployment pipelines
· Change management
· Backup, DR approaches and best practices.
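For illustration only, a minimal sketch of the record- and field-level verification called out above, assuming psycopg2 connections to the source (on-prem) and target (AWS) PostgreSQL instances; the table name `accounts` and key `account_id` are hypothetical placeholders, not details of this engagement.

```python
# Minimal record- and field-level verification sketch (illustrative only).
# Assumes psycopg2 connections to the on-prem source and the migrated AWS
# target, and a hypothetical table `accounts` keyed by `account_id`.
import psycopg2


def row_counts(conn, table):
    """Record-level check: total row count of a table."""
    with conn.cursor() as cur:
        cur.execute(f"SELECT count(*) FROM {table}")
        return cur.fetchone()[0]


def row_checksums(conn, table, key):
    """Field-level check: an MD5 over the full text of every row."""
    with conn.cursor() as cur:
        cur.execute(f"SELECT {key}, md5({table}::text) FROM {table}")
        return dict(cur.fetchall())


def verify(source_dsn, target_dsn, table="accounts", key="account_id"):
    src, tgt = psycopg2.connect(source_dsn), psycopg2.connect(target_dsn)
    try:
        assert row_counts(src, table) == row_counts(tgt, table), "row count mismatch"
        src_sums = row_checksums(src, table, key)
        tgt_sums = row_checksums(tgt, table, key)
        mismatched = [k for k, v in src_sums.items() if tgt_sums.get(k) != v]
        print(f"{table}: {len(mismatched)} mismatched rows")  # report for sign-off
        return mismatched
    finally:
        src.close()
        tgt.close()
```

In practice, checksums would typically be computed in batches or pushed down to the databases rather than pulling every row, and DynamoDB targets would need a separate, service-specific comparison.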
Qualifications
- Worked on mid-to-large-scale data migration projects, specifically from on-prem to AWS, preferably in the BFSI domain
- Deep expertise in AWS Redshift, PostgreSQL, and DynamoDB from a data management, performance, scalability, and consistency standpoint
- Strong knowledge of AWS Cloud architecture, components, and solutions, and the AWS Well-Architected Framework
- Expertise in SQL and database performance tuning
- Solution architecture work for enterprise-grade BFSI applications
- Successful track record of defining and implementing data migration strategies
- Excellent communication and problem-solving skills
- 10+ years of experience in technology, with at least 4 years in AWS and DBA/database management/migration work
- Bachelor's degree or higher in Engineering or a related field
Proficiency in Linux.
Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with MySQL, MongoDB, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow.
Experience with data pipeline and ETL tools like AWS Glue (see the sketch after this list).
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
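As a rough illustration of the Airflow and Glue items above, a minimal DAG sketch that triggers a hypothetical AWS Glue job on a daily schedule via boto3; the job name, region, and schedule are assumptions, not details from this posting.

```python
# Minimal Airflow DAG sketch (illustrative): triggers a hypothetical AWS Glue
# job named "daily_etl_job" once a day. Job name, region, and schedule are
# placeholders, not values from the posting.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**_):
    glue = boto3.client("glue", region_name="us-east-1")
    run = glue.start_job_run(JobName="daily_etl_job")
    print("started Glue run:", run["JobRunId"])


with DAG(
    dag_id="daily_glue_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_job", python_callable=start_glue_job)
```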
ROLE AND RESPONSIBILITIES
Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills as business requirements evolve. Familiar with extracting relevant data, and cleansing and transforming it into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.
QUALIFICATIONS AND EDUCATION REQUIREMENTS
Technical Bachelor’s Degree.
Non-Technical Degree holders should have 1+ years of relevant experience.
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stakeholders, including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience working with data-intensive projects, using a variety of technology stacks including different programming languages (Java, Python, Scala).
2.3. Experience in building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, to support other teams running pipelines/jobs/reports, etc.
2.4. Experience with open-source stacks.
2.5. Experience working with RDBMS and NoSQL databases.
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. Core computer science academic background.
2.8. Aspiration to continue pursuing a career in the technical stream.
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn, and improve rapidly.
Working alongside the highly motivated, advanced Machine Learning team, your key responsibilities are to research, design, develop, and implement applications that will be integrated into our workflows.
As AWS Data Engineer, you are a full-stack data engineer that loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and the flexibility of your solution that scales to respond to broader business questions.
If you love to solve problems using your skills, then come join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What will you do?
- Write efficient code in PySpark and AWS Glue (see the sketch after this list)
- Write SQL queries in Amazon Athena and Amazon Redshift
- Explore new technologies and learn new techniques to solve business problems creatively
- Collaborate with many teams - engineering and business, to build better data products and services
- Deliver the projects along with the team collaboratively and manage updates to customers on time
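A minimal sketch of the kind of PySpark work described above, assuming a simple batch job that curates raw CSV into partitioned Parquet that Athena or Redshift Spectrum could query; the bucket paths and columns are hypothetical.

```python
# Minimal PySpark ETL sketch (illustrative only): reads raw CSV from S3,
# applies a simple transformation, and writes partitioned Parquet that an
# Athena or Redshift Spectrum table could point at. Bucket names and columns
# are placeholders, not Mactores specifics.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

orders = (
    spark.read.option("header", True)
    .csv("s3://example-raw-bucket/orders/")             # hypothetical source
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate to one row per day, then write partitioned Parquet for querying.
daily_revenue = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_revenue/"        # hypothetical target
)
```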
What are we looking for?
- 1 to 3 years of experience with Apache Spark, PySpark, and AWS Glue
- 2+ years of experience in writing ETL jobs using PySpark and SparkSQL
- 2+ years of experience with SQL queries and stored procedures
- Deep understanding of the DataFrame API and the transformation functions supported by Spark 2.7+
You will be preferred if you have:
- Prior experience working on AWS EMR and Apache Airflow
- Certifications: AWS Certified Big Data – Specialty, Cloudera Certified Big Data Engineer, or Hortonworks Certified Big Data Engineer
- Understanding of DataOps Engineering
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more about our work culture at https://mactores.com/careers.
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
Location: Pune/Nagpur, Goa, Hyderabad
Job Requirements:
- 9+ years of total experience, preferably in the big data space.
- Experience creating Spark applications using Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs run as steps.
- Experience in Spark job performance tuning and optimization.
- Should have experience in processing data using Kafka/Python (see the sketch after this list).
- Should have experience and understanding of configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in a data warehouse.
- Hands-on experience working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience with AWS services like EMR.
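A minimal sketch of Kafka processing with Python as mentioned above, using PySpark Structured Streaming; the broker address, topic, schema, and S3 paths are placeholders, and it assumes the Spark Kafka connector package is available on the cluster.

```python
# Minimal PySpark Structured Streaming sketch (illustrative only): consumes a
# hypothetical Kafka topic "events" and appends the parsed records to S3.
# Broker address, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")    # hypothetical broker
    .option("subscribe", "events")                        # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/events/")        # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```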
We are a fast-growing digital, cloud, and mobility services provider whose principal market is North America. We are looking for talented database/SQL experts for the management and analytics of large data sets in various enterprise projects.
Responsibilities
Translate business needs to technical specifications
Manage and maintain various database servers (backup, replicas, shards, jobs)
Develop and execute database queries and conduct analyses
Occasionally write scripts for ETL jobs.
Create tools to store data (e.g. OLAP cubes)
Develop and update technical documentation
Requirements
Proven experience as a database programmer and administrator
Background in data warehouse design (e.g. dimensional modeling) and data mining
Good understanding of SQL and NoSQL databases, online analytical processing (OLAP), and ETL (extract, transform, load) frameworks
Advanced knowledge of SQL queries, SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS)
Familiarity with BI technologies (strong Tableau hands-on experience) is a plus
Analytical mind with a problem-solving aptitude
Company Profile:
Easebuzz is a payment solutions (fintech) company that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including the payment infrastructure, to solve complete business problems. It is a wonderful place where everything related to payments, lending, subscriptions, and eKYC happens at the same time.
We have been consistently profitable and are constantly developing new innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a fundraise of $4M in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz's corporate culture is tied to the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.
Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.
Salary: As per company standards.
Designation: Data Engineering
Location: Pune
Experience with ETL, Data Modeling, and Data Architecture
Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.
Experience with AWS cloud data lake for development of real-time or near real-time use cases
Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing
Build data pipeline frameworks to automate high-volume and real-time data delivery (see the sketch after this list)
Create prototypes and proof-of-concepts for iterative development.
Experience with NoSQL databases, such as DynamoDB, MongoDB etc
Create and maintain optimal data pipeline architecture.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Evangelize a very high standard of quality, reliability, and performance for data models and algorithms that can be streamlined into the engineering and science workflows.
Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
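A minimal sketch of real-time ingestion as referenced above, assuming a boto3 producer writing to a hypothetical Kinesis stream named `clickstream`; the stream, region, and event shape are illustrative only.

```python
# Minimal Kinesis ingestion sketch (illustrative only): puts JSON events onto
# a hypothetical stream "clickstream" with boto3; a downstream consumer (e.g.
# a Lambda or a Firehose delivery stream) would land them in the data lake.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def publish_event(event: dict, stream: str = "clickstream") -> None:
    """Send one event; the partition key spreads records across shards."""
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user_id"],
    )


publish_event({"user_id": "u-123", "action": "page_view", "page": "/pricing"})
```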
Employment Type
Full-time
• 5+ years' experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years' experience with healthcare payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift, and Jupyter notebooks
• Strong in Spark Scala & Python pipelines (ETL & Streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java, Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in Requirements gathering, Design & Development
• Working with cross-functional teams to meet strategic goals.
• Experience in high volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman/API experience, and Apache Airflow or similar scheduler experience
• Nice-to-have experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
We are looking for a Senior Database Developer to provide a senior-level contribution to design, develop, and implement critical business enterprise applications for marketing systems.
- Play a lead role in developing, deploying, and managing our databases (Oracle, MySQL, and MongoDB) on public clouds.
- Design and develop PL/SQL processes to perform complex ETL processes.
- Develop UNIX and Perl scripts for data auditing and automation.
- Responsible for database builds and change requests.
- Holistically define the overall reference architecture and manage its overall implementation in the production systems.
- Identify architecture gaps that can improve availability, performance, and security for both production systems and database systems, and work towards resolving those issues.
- Work closely with Engineering, Architecture, Business and Operations teams to provide necessary and continuous feedback.
- Automate all the manual steps for the database platform.
- Deliver solutions for access management, availability, security, replication and patching.
- Troubleshoot application database performance issues.
- Participate in daily huddles (30 min.) to collaborate with onshore and offshore teams.
Qualifications:
- 5+ years of experience in database development.
- Bachelor’s degree in Computer Science, Computer Engineering, Math, or similar.
- Experience using ETL tools (Talend or Ab Initio a plus).
- Experience with relational database programming, processing, and tuning (Oracle, PL/SQL, MySQL, MS SQL Server, SQL, T-SQL).
- Familiarity with BI tools (Cognos, Tableau, etc.).
- Experience with Cloud technology (AWS, etc.).
- Agile or Waterfall methodology experience preferred.
- Experience with API integration.
- Advanced software development and scripting skills for use in automation and interfacing with databases.
- Knowledge of software development lifecycles and methodologies.
- Knowledge of developing procedures, packages and functions in a DW environment.
- Knowledge of UNIX, Linux and Service Oriented Architecture (SOA).
- Ability to multi-task, to work under pressure, and think analytically.
- Ability to work with minimal supervision and meet deadlines.
- Ability to write technical specifications and documents.
- Ability to communicate effectively with individuals at all levels in the company and with various business contacts outside of the company in an articulate, professional manner.
- Knowledge of CDP, CRM, MDM and Business Intelligence is a plus.
- Flexible work hours.
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.