22+ AWS CloudFormation Jobs in Delhi, NCR and Gurgaon | AWS CloudFormation Job openings in Delhi, NCR and Gurgaon
Apply to 22+ AWS CloudFormation Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest AWS CloudFormation Job opportunities across top companies like Google, Amazon & Adobe.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
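For illustration, a minimal sketch of the kind of traffic-shifting step a canary rollout might use, assuming an ALB listener that forwards to a "stable" and a "canary" target group (all ARNs below are hypothetical):
```python
# Minimal canary-shift sketch using boto3 (hypothetical ARNs; assumes an ALB
# listener forwarding to a "stable" and a "canary" target group).
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-south-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:ap-south-1:123456789012:listener/app/demo/abc/def"
STABLE_TG = "arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/stable/111"
CANARY_TG = "arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/canary/222"


def shift_traffic(canary_weight: int) -> None:
    """Send `canary_weight` percent of traffic to the canary target group."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": STABLE_TG, "Weight": 100 - canary_weight},
                    {"TargetGroupArn": CANARY_TG, "Weight": canary_weight},
                ]
            },
        }],
    )


if __name__ == "__main__":
    # Typical rollout: 5% -> 25% -> 100%, gated by health checks and alerts.
    shift_traffic(5)
```
In practice each weight increase would be gated on the monitoring signals described in the next section.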
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
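As a hedged example of the alerting side, the sketch below creates a CloudWatch alarm on ALB 5xx errors with boto3; the alarm name, dimensions, thresholds, and SNS topic are placeholders rather than a prescribed setup:
```python
# Sketch: a CloudWatch alarm on ALB 5xx errors wired to an SNS topic
# (alarm name, load balancer dimension, and topic ARN are hypothetical).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="alb-5xx-spike",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/demo/abc123"}],
    Statistic="Sum",
    Period=60,                      # evaluate per minute
    EvaluationPeriods=5,            # five consecutive breaching minutes
    Threshold=50,                   # more than 50 5xx responses per minute
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-alerts"],
)
```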
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
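A small PyMongo sketch of the indexing and query-tuning loop described above; the connection string, collection, and fields are illustrative only:
```python
# Sketch: compound index plus a query-plan check with PyMongo
# (connection string, database, and field names are hypothetical).
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
watch_history = client["ott"]["watch_history"]

# Support the hot query path: "latest items watched by a user".
watch_history.create_index(
    [("userId", ASCENDING), ("watchedAt", DESCENDING)],
    name="userId_watchedAt",
)

# Verify the index is used (look for IXSCAN rather than COLLSCAN).
plan = watch_history.find({"userId": "u-123"}).sort("watchedAt", -1).limit(20).explain()
print(plan["queryPlanner"]["winningPlan"])
```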
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
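One minimal way to drive IaC from automation code is to deploy a CloudFormation template with boto3, sketched below; the stack name and template are illustrative, and Terraform or Ansible would be equally valid per the requirement above:
```python
# Sketch: deploying a minimal CloudFormation template from automation code
# (stack name and template are hypothetical).
import json
import boto3

TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}

cfn = boto3.client("cloudformation", region_name="ap-south-1")
cfn.create_stack(
    StackName="demo-artifacts",
    TemplateBody=json.dumps(TEMPLATE),
    OnFailure="ROLLBACK",
)
# Block until the stack is fully created (or fails).
cfn.get_waiter("stack_create_complete").wait(StackName="demo-artifacts")
```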
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why join the company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- Company type: product companies preferred; exceptions made for service-company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide the CTC breakup (Fixed + Variable).
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch appears after this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
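A minimal Airflow/MWAA DAG sketch for the retraining-pipeline responsibility above; the task bodies, IDs, and schedule are placeholders rather than an actual pipeline:
```python
# Sketch: a minimal Airflow/MWAA DAG that retrains a model daily and then
# publishes an evaluation metric (task logic and names are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(**context):
    # Placeholder: pull features from S3/Athena, fit, and persist the artifact.
    print("training model...")


def evaluate_and_publish(**context):
    # Placeholder: compute metrics and push them to CloudWatch/Grafana.
    print("evaluating model and publishing metrics...")


with DAG(
    dag_id="daily_model_retrain",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["mlops"],
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate", python_callable=evaluate_and_publish)
    train >> evaluate
```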
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js
Criteria:
Need candidates from Growing startups or Product based companies only
1. 4–8 years’ experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
Description
About the opportunity
We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.
As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.
What you will be doing
- Build and ship features in our Node.js codebase (currently migrating to TypeScript) that directly impact user experience and help move the top and bottom line of the business.
- Collaborate closely with our product, design, and data teams to build innovative features and deliver a world-class product to our customers. At company, product managers don’t just “tell” engineering what to build; we all collaborate on how to solve a problem for our customers and the business, and engineering plays a big part in it.
- Design scalable platforms that empower our product and marketing teams to rapidly experiment.
- Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
- Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
- Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
- Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
The role could be ideal for you if you
- Have 4-8 years of experience working in backend engineering, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
- Are well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
- Are experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
- Have experience managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience using Kubernetes.
- Have experience in observability techniques like code instrumentation for metrics, tracing and logging.
- Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
- Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
- Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
- Can take ownership of goals and deliver them with high accountability.
- Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer in our team is an expert in at least one technology but often ventures into adjacent technologies like React.js, Flutter, Data Platforms, AWS and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, and Infrastructure as Code (Terraform, Pulumi, etc.).
Profile: Sr. Devops Engineer
Location: Gurugram
Experience: 5+ years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
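For reference, a hedged boto3 sketch of the VPC peering setup mentioned above; the VPC, route table IDs, and CIDR are hypothetical, and Transit Gateway or PrivateLink would follow a similar pattern through their own APIs:
```python
# Sketch: creating and accepting a VPC peering connection with boto3
# (VPC IDs are hypothetical; both VPCs assumed to be in the same account/region).
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111",        # requester (e.g., shared-services VPC)
    PeerVpcId="vpc-0bbb2222",    # accepter (e.g., workload VPC)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route tables on both sides still need routes pointing at the peering connection.
ec2.create_route(
    RouteTableId="rtb-0ccc3333",
    DestinationCidrBlock="10.1.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
```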
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access (see the sketch after this list).
- Optimize performance of containerized and microservices applications.
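The IRSA item above can be illustrated with the official Kubernetes Python client; the namespace, service account name, and role ARN below are placeholders and assume the cluster's OIDC provider and the IAM role already exist:
```python
# Sketch: creating an IRSA-enabled service account with the Kubernetes Python
# client (namespace, SA name, and role ARN are hypothetical).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

service_account = client.V1ServiceAccount(
    metadata=client.V1ObjectMeta(
        name="orders-api",
        namespace="prod",
        annotations={
            # IRSA: pods using this SA receive temporary credentials for this role.
            "eks.amazonaws.com/role-arn": "arn:aws:iam::123456789012:role/orders-api-s3-read"
        },
    )
)
v1.create_namespaced_service_account(namespace="prod", body=service_account)
```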
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, Guard Duty, Inspector and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor / Master’s degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
CTC: up to 40 LPA
Mandatory Criteria (Can't be neglected during screening) :
Need candidates from Growing startups or Product based companies only
1. 4–6 years experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
If interested, kindly share your updated resume at 82008 31681.
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI (a minimal sketch follows this list).
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
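As a rough illustration of the IaC responsibility above, the sketch below deploys a minimal ARM template with the Azure SDK for Python; the subscription ID, resource group, and template values are placeholders, and Terraform or Azure CLI would be equally valid approaches:
```python
# Sketch: deploying a minimal ARM template via the Azure SDK for Python
# (assumes azure-identity and azure-mgmt-resource are installed; the resource
# group, subscription ID, and storage account name are hypothetical).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

TEMPLATE = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2022-09-01",
        "name": "stdemodeploy001",
        "location": "centralindia",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id="<subscription-id>")
poller = client.deployments.begin_create_or_update(
    resource_group_name="rg-demo",
    deployment_name="storage-baseline",
    parameters={"properties": {"mode": "Incremental", "template": TEMPLATE, "parameters": {}}},
)
poller.result()  # block until the deployment finishes
```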
Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM–5 AM / 9 PM–6 AM
Only immediate joiners can apply.
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation); a small CDK sketch appears after this list.
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
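For the IaC familiarity noted above, a small AWS CDK (v2, Python) sketch; the stack and construct names are illustrative and assume a bootstrapped CDK environment:
```python
# Sketch: a tiny AWS CDK (v2, Python) stack defining a versioned S3 bucket
# (stack and construct IDs are hypothetical; deploy with `cdk deploy`).
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class StaticAssetsStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A private, versioned, encrypted bucket for static assets.
        s3.Bucket(
            self,
            "AssetsBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


app = cdk.App()
StaticAssetsStack(app, "static-assets")
app.synth()
```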
Roles and Responsibilities
• Ability to create solution prototypes and conduct proofs of concept of new tools.
• Work on research and understanding of new tools and areas.
• Clearly articulate the pros and cons of various technologies/platforms and perform detailed analysis of business problems and technical environments to derive a solution.
• Optimisation of the application for maximum speed and scalability.
• Work on feature development and bug fixing.
Technical skills
• Must have knowledge of networking in Linux and the basics of computer networks in general (a small reachability-check sketch appears after this list).
• Must have intermediate/advanced knowledge of one programming language, preferably Python.
• Must have experience writing shell scripts and configuration files.
• Should be proficient in Bash.
• Should have excellent Linux administration capabilities.
• Working experience with SCM; Git is preferred.
• Knowledge of build and CI/CD tools like Jenkins, Bamboo, etc. is a plus.
• Understanding of the architecture of OpenStack/Kubernetes is a plus.
• Code contributed to the OpenStack/Kubernetes community will be a plus.
• Data center network troubleshooting will be a plus.
• Understanding of the NFV and SDN domains will be a plus.
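A small, standard-library-only Python sketch of the networking fundamentals above: checking whether a service port is reachable and how long the TCP handshake takes (hosts and ports are placeholders):
```python
# Sketch: stdlib-only TCP reachability and connect-time check
# (hosts and ports below are hypothetical).
import socket
import time


def check_tcp(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the TCP connect time in milliseconds, or raise OSError on failure."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000.0


if __name__ == "__main__":
    for host, port in [("10.0.0.10", 22), ("10.0.0.20", 6443)]:
        try:
            print(f"{host}:{port} reachable in {check_tcp(host, port):.1f} ms")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")
```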
Soft skills
• Excellent verbal and written communication skills.
• Highly driven, positive attitude, team player, self-learning, self-motivating, and flexible.
• Strong customer focus; decent networking and relationship management.
• Flair for creativity and innovation.
• Strategic thinking.
This is an individual contributor role and will require client interaction on the technical side.
Must-have skills: Linux, Networking, Python, Cloud
Additional skills: OpenStack, Kubernetes, Shell, Java, Development
Requirements:
• New account acquisition and achievement of sales targets in the US/European market.
• Prospecting, contacting target C- and D-level executives, building close relationships, and closing sales remotely.
• Develop deep, robust and qualified sales pipeline.
• Planning and overseeing new marketing initiatives.
• Developing quotes and proposals for clients.
• Contacting potential clients to establish rapport and arrange meetings.
Required Candidate profile:
• Good understanding of IT/software services in outsourced managed IT services and cyber security. Good knowledge of selling services.
• Familiar with industry trends and able to create sales opportunities.
• Proficient in working on tenders and RFPs.
• Bachelor’s degree in business, marketing, or related field.
• Excellent organizational skills.
• Strong communication skills and IT fluency.
• Experience in sales, marketing, or a related field.
About the Company :
Our client enables enterprises in their digital transformation journey by offering Consulting & Implementation Services related to Data Analytics & Enterprise Performance Management (EPM).
Our client delivers best-suited solutions to customers across industries such as Retail & E-commerce, Consumer Goods, Pharmaceuticals & Life Sciences, Real Estate & Senior Housing, Hi-tech, Media & Telecom, as well as Manufacturing and Automotive clientele.
Our in-house research and innovation lab has conceived multiple plug-n-play apps, toolkits, and plugins to streamline implementation and enable faster time-to-market.
Job Title: AWS Developer
Notice Period: Immediate to 60 days
Experience: 3-8 years
Location: Noida, Mumbai, Bangalore & Kolkata
Roles & Responsibilities
- Bachelor’s degree in Computer Science or a related analytical field or equivalent experience is preferred
- 3+ years’ experience in one or more architecture domains (e.g., business architecture, solutions architecture, application architecture)
- Must have 2 years of experience in design and implementation of cloud workloads in AWS.
- Minimum of 2 years of experience handling workloads in large-scale environments. Experience in managing large operational cloud environments spanning multiple tenants through techniques such as Multi-Account management, AWS Well Architected Best Practices.
- Minimum 3 years of microservice architectural experience.
- Minimum of 3 years of experience working exclusively designing and implementing cloud-native workloads.
- Experience with analysing and defining technical requirements & design specifications.
- Experience with database design with both relational and document-based database systems.
- Experience with integrating complex multi-tier applications.
- Experience with API design and development.
- Experience with cloud networking and network security, including virtual networks, network security groups, cloud-native firewalls, etc.
- Proven ability to write programs using an object-oriented or functional programming language, with tools such as Spark, Python, AWS Glue, and AWS Lambda.
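To ground the Spark/Glue item above, a skeleton AWS Glue (PySpark) job is sketched below; the catalog database, table, and S3 path are hypothetical:
```python
# Sketch: skeleton of an AWS Glue (PySpark) job that reads a catalogued table,
# filters it, and writes Parquet to S3 (database, table, and path are hypothetical).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table from the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Keep only completed orders before writing the curated copy.
completed = orders.filter(f=lambda row: row["status"] == "COMPLETED")

glue_context.write_dynamic_frame.from_options(
    frame=completed,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```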
Job Specification
* Strong and innovative approach to problem solving and finding solutions.
* Excellent communicator (written and verbal, formal and informal).
* Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
* Ability to multi-task under pressure and work independently with minimal supervision.
Regards
Team Merito
Looking for an AWS Architect for our client's branch in Noida
Experience: 8+ years of relevant experience
Location: Noida
Notice period : Immediate to 30 days joiners
Must have: AWS, Spark, Python, AWS Glue, AWS S3, AWS Redshift, SageMaker, AppFlow, AWS Lake Formation, AWS Security, Lambda, EMR, SQL
Good to have:
CloudFormation, VPC, Java, Snowflake, Talend, RDS, Databricks
Responsibilities:
- Mentor the Data Architect and team; architect and implement ETL and data movement solutions such as ELT/ETL/data pipelines.
- Apply industry best practices and appropriate design patterns, enforce security, and manage compliance.
- Manage, test, monitor, and validate data warehouse activity, including data extraction, transformation, movement, loading, cleansing, and updating processes.
- Code, test, and implement datasets, and define and maintain data engineering standards.
- Architect, design, and implement database solutions related to Big Data, real-time data ingestion, data warehouses, and SQL/NoSQL.
- Architect, design, and implement automated workflows and routines using workflow scheduling tools.
- Identify automation opportunities and automate to optimize performance.
- Conduct appropriate functional and performance testing to identify bottlenecks and data quality issues.
- Be able to implement slowly changing dimensions as well as transaction, accumulating snapshot, and periodic snapshot fact tables.
- Collaborate with business users, EDW team members, and other developers throughout the organization to help everyone understand issues that affect the data warehouse.
Role – Infra Management Tech Consultant
This is for you, if you:
- Love technology and have an innate interest in coding and building products.
- Aren’t afraid of taking ownership and accountability. You love challenges and won’t stop until you’ve found a solution.
- Possess knowledge of Software development industry best practices.
- Have hands-on experience and expertise on the technical competencies needed for the job, so that you can work independently.
- Are passionate about the travel/hospitality space and can understand the nuances of hotel operations.
- Know how to have fun and smile often
Certifications Required
- AWS DevOps Engineer (professional) - equivalent or higher
- AWS Solution Architect (professional) - equivalent or higher
- AWS Developer (associate) - equivalent or higher
What will you do?
- Manage and oversee infrastructure for all engineering activities.
- Create and maintain Dev, QA, UAT, CS, Sales, and Prod environments (cloud + private).
- Backup & Restore underlying resources for failover.
- Manage connectivity, access & security for all environments.
- Upgrade environment for OS / software / service updates, certificates & patches.
- Upgrade environments for architecture upgrades.
- Upgrade environments for cost optimisations.
- Perform Release activity for all environments.
- Monitor (& setup alerts for) underlying resources for degradation of services.
- Automate as many infra activities as possible.
- Ensure Apps are healthy - log rotation, alarms, failover, scaling, recovery.
- Ensure DBs are healthy - size, indexing, fragmentation, failover, scaling, recovery.
- Assist in debugging production issues, engage with Development team to identify & apply long term fixes.
- Explore source code, databases, logs, and traces to find or create a solution.
Technical Competencies you’ll possess:
- B.E./B.Tech/MCA or equivalent
- 3+ years of meaningful work experience in DevOps handling complex services
- AWS DevOps Engineer/Architect/Developer certification
- Expertise with AWS services including but not limited to EC2, ECS, S3, RDS, Lambda, VPC, OpsWorks, CloudFront, Route53, CodeDeploy, SQS, SNS
- Hands-on experience in maintaining production databases, including creating queries for identifying bottlenecks, creating/maintaining indexes where required, and defragmenting databases
- Hands-on experience with (and strong understanding of) Linux & Windows based OS
- Expertise with Docker and related tools
- Hands-on experience with infrastructure-as-code tools like AWS CloudFormation/Terraform and configuration management tools like Puppet or Chef
- Strong grasp of modern stack protocols/technologies like:
  - Headers, caching
  - IP/TCP/HTTP(S), WebSockets, SSL/TLS
  - CDNs, DNS, proxies
- Expertise in setting up and maintaining a modern stack (on AWS & cloud), including:
  - Version control systems like TFS, SVN, GitLab servers, etc.
  - Application & proxy servers like Nginx, HAProxy, IIS, etc.
  - Database servers like MSSQL, MySQL, Postgres, Mongo, Redis, Elasticsearch, etc.
  - Monitoring tools like Grafana, Zabbix, Influx, Prometheus, etc.
- Experience in Python coding preferred
- Good understanding and experience in continuous integration/continuous deployment tools
Regards
Team Merito
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding of accessibility and security compliance (depending on the specific project).
- User authentication and authorization between multiple systems, servers, and environments.
- Integration of multiple data sources and databases into one system.
- Understanding of the fundamental design principles behind a scalable application.
- Configuration management tools (Ansible/Chef/Puppet) and cloud service providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem experience is a plus.
- Should be able to make key decisions for our infrastructure, networking, and security.
- Manipulation of shell scripts during migration and DB connection.
- Monitor production server health across parameters (CPU load, physical memory, swap memory) and set up monitoring tools (e.g., Nagios) to track production server health.
- Create alerts and configure monitoring of specified metrics to manage cloud infrastructure efficiently.
- Set up and manage VPCs and subnets; establish connections between different zones; block suspicious IPs/subnets via ACLs.
- Create and manage AMIs/snapshots/volumes; upgrade/downgrade AWS resources (CPU, memory, EBS).
- Manage microservices at scale and maintain the compute and storage infrastructure for various product teams.
- Strong knowledge of configuration management tools like Ansible, Chef, and Puppet.
- Work extensively with change-tracking tools like JIRA; perform log analysis and maintain documentation of production server error-log reports.
- Experienced in troubleshooting, backup, and recovery.
- Excellent knowledge of cloud service providers like AWS and DigitalOcean.
- Good knowledge of the Docker and Kubernetes ecosystem.
- Proficient understanding of code versioning tools, such as Git.
- Must have experience working in an automated environment.
- Good knowledge of Amazon Web Services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, and Amazon CloudWatch.
- Schedule jobs using crontab; create swap memory.
- Proficient knowledge of Identity and Access Management (IAM).
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
- Should have good knowledge of GCP.
Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in the relevant field
Experience: 2-6 years
Job Description:
We are looking to recruit engineers with a zeal for learning cloud solutions using Amazon Web Services (AWS). We'll prefer an engineer who is passionate about AWS Cloud technology, passionate about helping customers succeed, passionate about quality, and who truly enjoys what they do. The qualified candidate for the AWS Cloud Engineer position is someone who has a can-do attitude and is an innovative thinker.
- Be hands-on, with responsibility for the installation, configuration, and ongoing management of Linux-based solutions on AWS for our clients.
- Responsible for creating and managing Auto Scaling EC2 instances using VPCs, Elastic Load Balancers, and other services across multiple availability zones to build resilient, scalable, and failsafe cloud solutions.
- Familiarity with other AWS services such as CloudFront, ALB, EC2, RDS, Route 53, etc. is desirable.
- Working knowledge of RDS, DynamoDB, GuardDuty, WAF, and multi-tier architecture.
- Proficient in working with Git, CI/CD pipelines, AWS DevOps, Bitbucket, and Ansible.
- Proficient in working with Docker Engine, containers, and Kubernetes.
- Expertise in migrating workloads to AWS from other cloud providers.
- Should be versatile in problem solving and able to resolve complex issues, ranging from OS and application faults to creatively improving solution design.
- Should be ready to work in rotation on a 24x7 schedule, and be available on call at other times due to the critical nature of the role.
- Fault finding, analysis, and logging of information for reporting performance exceptions.
- Deployment, automation, management, and maintenance of AWS cloud-based production system.
- Ensuring availability, performance, security, and scalability of AWS production systems.
- Management of creation, release, and configuration of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Pre-production acceptance testing for quality assurance.
- Provision of critical system security by leveraging best practices and prolific cloud security solutions.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on AWS platform.
- Designing, maintenance and management of tools for automation of different operational processes.
Desired Candidate Profile
- Customer-oriented personality with good communication skills, able to articulate and communicate very effectively verbally as well as in written communications.
- Be a team player who collaborates and shares experience and expertise with the rest of the team.
- Understands database systems such as MSSQL, MongoDB, MySQL, MariaDB, DynamoDB, RDS.
- Understands web servers such as Apache and Nginx.
- Must be RHEL certified.
- In-depth knowledge of Linux commands and services.
- Efficient enough to manage all internet applications, including FTP, SFTP, Nginx, Apache, MySQL, and PHP.
- Good communication skills.
- At least 3-7 years of experience in AWS and DevOps.
Company Profile:
i2k2 Networks is a trusted name in the IT cloud hosting services industry. We help enterprises with cloud migration, cost optimization, support, and fully managed services, which help them move faster and scale with lower IT costs. i2k2 Networks offers a complete range of cutting-edge solutions that drive Internet-powered business models. We excel in:
- Managed IT Services
- Dedicated Web Servers Hosting
- Cloud Solutions
- Email Solutions
- Enterprise Services
- Round the clock Technical Support
https://www.i2k2.com/
Regards
Nidhi Kohli
i2k2 Networks Pvt Ltd.
AM - Talent Acquisition
Roles & Responsibilities:
- Design, implement and maintain all AWS infrastructure and services within a managed service environment
- Should be able to work on 24 X 7 shifts for support of infrastructure.
- Design, Deploy and maintain enterprise class security, network and systems management applications within an AWS environment
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability and security
- Manage the production deployment and deployment automation
- Implement process and quality improvements through task automation
- Institute infrastructure as code, security automation, and automation of routine maintenance tasks
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Build, deploy, and manage Kubernetes clusters through automation
- Create and deliver knowledge sharing presentations and documentation for support teams
- Learning on the job and explore new technologies with little supervision
- Work effectively with onsite/offshore teams
Qualifications:
- Must have Bachelor's degree in Computer Science or related field and 4+ years of experience in IT
- Experience in designing, implementing, and maintaining all AWS infrastructure and services
- Design and implement availability, scalability, and performance plans for the AWS managed service environment
- Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability, and security
- Hands-on technical expertise in Security Architecture, automation, integration, and deployment
- Familiarity with compliance & security standards across the enterprise IT landscape
- Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
- Solid understanding of AWS IAM Roles and Policies
- Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
- Experience with automation/configuration management using Terraform/Chef/Ansible or similar.
- Understanding of protocols/technologies like Microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
- Experience in managing and working with the offshore teams
- Familiarity with CI/CD systems such as Jenkins, GitLab CI
- Scripting experience (Python, Bash, etc.)
- AWS, Kubernetes Certification is preferred
- Ability to work with and influence Engineering teams
Position: DevOps Engineer
Job Description
The candidate should have the following Skills:
- Hands-on experience with DevOps & CI/CD open-source tools (Jenkins, etc.), including AWS DevOps services (CodePipeline, CloudFormation, etc.).
- Experience in building and deploying using Java/Python/Node.js on cloud infrastructure (Docker or Kubernetes containers, or Lambda).
- Exposure to cloud operations, releases, and configuration management
- Experience in implementing non-functional requirements for microservices, including performance, security, compliance, HA, and Disaster Recovery.
- Good soft skills, great attitude, and passion for working in a product startup environment
Total experience of 2-5 years post B.E./B.Tech/MCA in Computer Science Engineering.
Designation: Technical Lead / Architect – Java
Experience: 6+ yrs
Location: Noida
Skills - Must have: architecture experience, design, strong Java, cloud, latest tech stacks, hands-on, problem-solving, communication skills, client handling, sprint planning & execution, microservices, Spring
Tech Skills Required:
- 7+ years of design/implementation/consulting experience with distributed applications.
- Experience in infrastructure architecture, database architecture, and networking
- Experience architecting/deploying/operating solutions
- Experience migrating or transforming legacy customer solutions to the cloud
- Working experience with Spring Boot or similar frameworks using Java 8/11
- Exposure to REST services, WebSockets, SOAP services.
- Databases: MySQL, PostgreSQL; NoSQL like MongoDB, Cassandra
- Queuing systems like RabbitMQ, ActiveMQ, Kafka
- Implemented microservices using design patterns like service discovery, circuit breakers, API Gateway, and OpenTracing.
- Experience with security standards like OAuth 2.0, UMA, OpenID Connect.
- Hands-on knowledge of container tools like Docker and Podman
- Excellent in Problem-solving & solutioning
- Excellent communication skills
- Working knowledge of at least one of the top 3 cloud platforms
Good to have:-
- gRPC
- Kubernetes/Openshift
- Pivotal Cloud Foundry, PKS
- GitHub profile with commit history.
- Certification equivalent to Solutions Architect.






