DevOps Engineer - India
at a client of DMAIC - a US-based product development company
Required Competencies:
- 3+ years of experience automating application and database deployments using most of the technologies listed under Roles and Responsibilities below
- Strong experience in .NET and MS SQL
- Ability to quickly learn and implement new tools/technologies
- Ability to excel within an "Agile" environment
- Infrastructure automation is a plus
Roles and Responsibilities:
- Application Deployments - Azure DevOps YAML build pipelines and classic release pipelines, PowerShell and bash scripts, Docker containers
- Database Deployments - DACPAC
- SCM - BitBucket
- Infrastructure - Windows Servers, Linux Servers, SQL Server, Azure SQL and many more Azure resources
- Application Types - Web APIs, Web Forms, Windows Services, Task Scheduler Jobs, SQL Server Agent jobs
- Development/Test Stack - VueJS, .NET Framework, .NET Core, Python, TypeScript, PowerBI, SSIS, SQL Server, NUnit, XUnit, Selenium, Postman, Sentry
- Currently exploring ARM, Terraform and Pulumi for infrastructure automation
- Automate application/database builds and deployments and write scripts to automate repetitive tasks
- Optimize and improve existing builds/deployments
- Deploy applications/databases to different environments
- Setup/configure infrastructure on Azure
- Create/merge branches in git
- Help with debugging post-deployment issues
- Managing access to BitBucket, Sentry, VMs and Azure resources
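As a rough illustration of the database-deployment work above, a DACPAC publish is typically driven through the SqlPackage CLI. The sketch below is a minimal, hypothetical wrapper (server and file names are invented, and it assumes a `sqlpackage` binary is on `PATH`); real pipelines would also pass connection security options:

```python
import shutil
import subprocess

def build_dacpac_publish_cmd(dacpac_path, server, database):
    """Assemble a SqlPackage publish command for a DACPAC deployment."""
    return [
        "sqlpackage",
        "/Action:Publish",
        f"/SourceFile:{dacpac_path}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
    ]

def deploy_dacpac(dacpac_path, server, database):
    """Run the publish, failing fast if the tool is not installed."""
    cmd = build_dacpac_publish_cmd(dacpac_path, server, database)
    if shutil.which(cmd[0]) is None:
        raise RuntimeError("sqlpackage not found on PATH")
    subprocess.run(cmd, check=True)
```

Separating command construction from execution keeps the argument logic unit-testable without a database.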
About RaRa Delivery
Not just a delivery company…
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on ‘one-to-one’ deliveries, the company has developed proprietary, real-time batching tech to do ‘many-to-many’ deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan and many more.
We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳
Future of eCommerce Logistics.
- Data-driven logistics company bringing the same-day delivery revolution to Indonesia 🇮🇩
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
- Build and maintain CI/CD tools and pipelines.
- Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
- Continuously improve code quality, product execution, and customer delight.
- Communicate, collaborate and work effectively across distributed teams in a global environment.
- Share knowledge across teams to strengthen the product and the wider knowledge base
- Contribute to improving team relatedness, and help build a culture of camaraderie.
- Continuously refactor applications to ensure high-quality design
- Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
- Excellent bash and scripting fundamentals, with hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
- Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Basic understanding of cluster orchestrators and schedulers (Kubernetes)
- Deep knowledge of Linux as a production environment and container technologies (e.g. Docker), Infrastructure as Code such as Terraform, and K8s administration at large scale.
- Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
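The networking bullets above (TCP/IP, load balancing) often come down to quick reachability checks when troubleshooting distributed infrastructure. A minimal sketch of such a probe, using only the standard library (host/port values would come from your own service inventory):

```python
import socket

def tcp_check(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and completes the 3-way handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts, and DNS failures
        return False
```

The same building block underlies most load-balancer health checks; production probes usually also validate an application-level response, not just the handshake.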
Job Brief
The role is to coordinate strategies for defining, deploying, and designing a next-generation, cloud-based unified communications platform. This includes managing all engineering projects for VoIP initiatives, planning technology roadmaps, and configuring and optimizing all products and services, both internally and those integrated with Internet-based services.
Responsibilities:
- Provide ongoing support for the Stage and Prod environments hosted in public clouds;
- Improve observability of the product and of the infrastructure it runs on;
- Support integrations with other products and collaborate with the teams owning them;
- Write high-quality documentation;
- Improve the deployment process: CI/CD pipelines, automation, and so on.
Requirements:
Technical Experience:
- Confident Linux administrator, with solid experience administering services used by customers (internal or external);
- Monitoring and observability tools: Prometheus, Grafana, ELK;
- CI/CD experience: Git, GitLab, Bazel or Jenkins;
- Understanding of DevOps/SRE practices, including the common toolset, approaches, deployment strategies, et cetera;
- IaC: HashiCorp Terraform, CloudFormation;
- Public clouds: networking, containers, DNS, and other common public cloud services - computing, storage, billing, user management and role control (AWS);
- Docker and Kubernetes: close to CKA level;
- Networks: TCP/IP, NAT/PAT, HTTP(S), DNS;
- Basic database administration experience (MySQL or PostgreSQL);
- Automation: Python for Linux administration;
- Understanding of Change and Incident management processes
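To make the "Python for Linux administration" requirement concrete, a typical automation task is watching filesystem usage and flagging mounts that cross a threshold. A minimal sketch using only the standard library (the 80% threshold is an arbitrary illustrative default):

```python
import shutil

def disk_usage_percent(path="/"):
    """Used-space percentage for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)
    return round(usage.used / usage.total * 100, 1)

def over_threshold(paths, warn_at=80.0):
    """Return a {path: percent} dict for paths at or above the threshold."""
    return {p: disk_usage_percent(p) for p in paths
            if disk_usage_percent(p) >= warn_at}
```

In practice the result would feed an alerting channel (e.g. a Prometheus exporter or a chat webhook) rather than being read by hand.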
Job Responsibilities:
Section 1 -
- Responsible for managing and providing L1 support to build, design, deploy and maintain Cloud solutions on AWS.
- Implement, deploy and maintain development, staging & production environments on AWS.
- Familiar with serverless architecture and services on AWS like Lambda, Fargate, EBS, Glue, etc.
- Understanding of Infrastructure as Code and familiarity with related tools like Terraform, Ansible, CloudFormation, etc.
Section 2 -
- Managing the Windows and Linux machines, Kubernetes, Git, etc.
- Responsible for L1 management of Servers, Networks, Containers, Storage, and Databases services on AWS.
Section 3 -
- Timely monitoring of production workload alerts and quick resolution of the issues
- Responsible for monitoring and maintaining the Backup and DR process.
Section 4 -
- Responsible for documenting the process.
- Responsible for leading cloud implementation projects with end-to-end execution.
Qualifications: Bachelor of Engineering / MCA, preferably with AWS/Cloud certification
Skills & Competencies
- Linux and Windows servers management and troubleshooting.
- AWS services experience with CloudFormation, EC2, RDS, VPC, EKS, ECS, Redshift, Glue, etc.
- Kubernetes and containers knowledge
- Understanding of setting up AWS messaging, streaming and queuing services (MSK, Kinesis, SQS, SNS, MQ)
- Strong understanding of serverless architecture
- Strong understanding of networking concepts
- Experience managing monitoring and alerting systems
- Sound knowledge of database concepts like Data Warehouse, Data Lake, and ETL jobs
- Good Project management skills
- Documentation skills
- Backup, and DR understanding
Soft Skills - Project management, Process Documentation
Ideal Candidate:
- AWS certification and 2-4 years of experience, including project execution experience.
- Someone who is interested in building sustainable cloud architecture with automation on AWS.
- Someone who is interested in learning and being challenged on a day-to-day basis.
- Someone who can take ownership of the tasks and is willing to take the necessary action to get it done.
- Someone who is curious to analyze and solve complex problems.
- Someone who is honest about the quality of their work and comfortable taking ownership of both their successes and failures.
Behavioral Traits
- We are looking for someone who is interested in being part of a creative, innovation-driven environment with other team members.
- We are looking for someone who understands the importance of teamwork and individual ownership at the same time.
- We are looking for someone who can debate logically, respectfully disagree, admit if proven wrong, and learn from their mistakes and grow quickly.
DevOps Engineer
at Sedin Technologies - RailsFactory
We’ve delivered 850+ projects for 600+ clients from various industries like Manufacturing, BFSI, eCommerce & Retail, Energy & Utilities, Healthcare & Life Science, and Media & Entertainment.
We strongly believe in transparency which has been our key success factor in building long-term partnerships with our clients and employees.
Sedin: https://sedintechnologies.com/
RailsFactory: https://railsfactory.com/
Tarka Labs: https://tarkalabs.com/
Job Description:
We are looking for a DevOps Engineer to accelerate deployment automation and help us achieve the dream of Ship Anywhere, Anytime! A DevOps engineer capable of building and maintaining highly available, highly resilient deployment systems and infrastructure.
Requirements
Lives and breathes automation
Someone with exceptional knowledge of AWS/GCP
Capable of scripting in any scripting language
Capable of analyzing various tools and services and building systems with them for a given requirement
Ability to interact with developers, the infrastructure team and various stakeholders
Ability to build and maintain a highly reliable, highly available environment
Someone who has a deep understanding of operating systems, preferably Linux
Someone who has work experience with CloudFormation templates, Terraform, etc.
Someone who has work experience with configuration management tools like Ansible and Chef, and build tools like Maven, Gradle, Ant, etc.
Someone who has work experience building CI/CD pipelines for microservices using Jenkins, with AWS as the infrastructure
Work experience with containerized deployment using Docker and Kubernetes
Strong work experience implementing and maintaining ITIL processes
Work experience with monitoring tools like Datadog, New Relic, Grafana, Prometheus
Quick learner and team player
On a typical day, you might
Analyse and implement the CI/CD pipeline of any microservice, built on any stack.
Design and implement tools for the deployment process using Python, shell scripting, DynamoDB, Postgres, Docker, Kubernetes, etc.
Build POCs for problem statements using open-source tools and services provided by AWS or GCP.
Perform enhancements and modifications to the existing codebase and tools as required.
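Deployment tooling of the kind described above constantly deals with transient failures (flaky APIs, instances that are still booting), so almost every such toolbox grows a retry helper. A minimal, generic sketch (attempt counts and delays are illustrative defaults, not anything prescribed by this role):

```python
import time

def retry(fn, attempts=3, base_delay=0.1):
    """Call `fn`, retrying with exponential backoff on any exception."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** i)  # 0.1s, 0.2s, 0.4s, ...
```

Production variants usually narrow the caught exception types and add jitter to the delay so many retrying clients do not synchronise.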
DevOps Engineer/Azure
Job Description:
Responsibilities
· Having E2E responsibility for the Azure landscape of our customers
· Managing code releases and operational tasks within a global team, with a focus on automation, maintainability, security and customer satisfaction
· Making use of a CI/CD framework to rapidly support lifecycle management of the platform
· Acting as L2-L3 support for incidents, problems and service requests
· Working with various Atos and 3rd-party teams to resolve incidents and implement changes
· Implementing and driving automation and self-healing solutions to reduce toil
· Managing error budgets, with hands-on design and development of solutions to address reliability issues and/or risks
· Support ITSM processes and collaborate with service management representatives
Job Requirements
· Azure Associate certification or equivalent knowledge level
· 5+ years of professional experience
· Experience with Terraform and/or native Azure automation
· Knowledge of CI/CD concepts and toolset (e.g. Jenkins, Azure DevOps, Git)
· Must be adaptable to working in a varied, fast-paced, exciting, ever-changing environment
· Good analytical and problem-solving skills to resolve technical issues
· Understanding of Agile development and SCRUM concepts a plus
· Experience with Kubernetes architecture and tools a plus
● Building and managing multiple application environments on AWS using automation tools like Terraform or CloudFormation etc.
● Deploy applications with zero downtime via automation with configuration management tools such as Ansible.
● Setting up Infrastructure monitoring tools such as Prometheus, Grafana
● Setting up centralised logging using tools such as ELK.
● Containerisation of applications/microservices.
● Ensure application availability of 99.9% with highly available infrastructure.
● Monitoring performance of applications and databases.
● Ensuring that systems are safe and secure against cyber security threats.
● Working with software developers to ensure that release cycle and deployment processes are followed.
● Evaluating existing applications and platforms, giving recommendations for enhancing performance via gap analysis, identifying the most practical alternative solutions and assisting with modifications.
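The 99.9% availability target mentioned above translates into a concrete downtime budget, which is worth being able to compute off-hand. A quick sketch (assuming a 30-day month as the SLA window):

```python
def allowed_downtime_minutes(availability_pct, period_minutes=30 * 24 * 60):
    """Minutes of downtime permitted per period at a given availability SLA."""
    return period_minutes * (1 - availability_pct / 100)

# 99.9% over a 30-day month leaves roughly 43.2 minutes of downtime;
# 99.99% would leave only about 4.3 minutes.
```

This "error budget" framing is what makes an availability number actionable: it bounds how much risk a team can spend on deploys and experiments per month.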
Skills -
● Strong knowledge of AWS managed services such as EC2, RDS, ECS, ECR, S3, CloudFront, SES, Redshift, ElastiCache, AMQP etc.
● Experience in handling production workloads.
● Experience with Nginx web server.
● Experience with NoSQL and SQL databases such as MongoDB, PostgreSQL etc.
● Experience with Containerisation of applications/micro services using Docker.
● Understanding of system administration in Linux environments.
● Strong knowledge of Infrastructure as Code tools such as Terraform, CloudFormation etc.
● Strong knowledge of configuration management tools such as Ansible, Chef etc.
● Familiarity with tools such as GitLab, Jenkins, Vercel, JIRA etc.
● Proficiency in scripting languages including Bash, Python etc.
● Full understanding of software development lifecycle best practices and agile methodology
● Strong communication and documentation skills.
● An ability to drive to goals and milestones while valuing and maintaining a strong attention to detail
● Excellent judgment, analytical thinking, and problem-solving skills
● Self-motivated individual that possesses excellent time management and organizational skills
Production Engineer
at Healthifyme
Responsibilities:
- The Production Engineer (PE) is responsible for managing, monitoring, and configuring the applications on staging and production systems.
- Should be able to modify scripts; hands-on Python scripting or coding experience is required
- Together with your engineering team, you will share an on-call rotation and be an escalation contact for service incidents
- Debug and fix hard problems in live production
Requirements:
- 4+ years of industry or open-source experience
- Worked collaboratively with software development team
- Experience in Python, Django
- Experience in AWS Infrastructure
- Experience with relational databases and SQL
DevOps Engineer
at Magicflare Software Services
Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days
Job Description:
Technical Experience (Must Have):
Cloud: Azure
DevOps Tool: Terraform, Ansible, Github, CI-CD pipeline, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL etc)
Database: MongoDB
Professional Attributes: Excellent communication, written, presentation, and problem-solving skills.
Experience: Minimum of 5-8 years of experience in Cloud Automation and Application
Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator
Role:
Building and maintaining tools to automate application and infrastructure deployment, and to monitor operations.
Design and implement cloud solutions which are secure, scalable, resilient, monitored, auditable and cost-optimized.
Implementing the transformation from the as-is state to the future state.
Coordinating with other members of the DevOps team, Development, Test, and other teams to enhance and optimize existing processes.
Provide systems support; implement monitoring, logging and alerting solutions that enable the production systems to be monitored.
Writing Infrastructure as Code (IaC) using industry-standard tools and services.
Writing application deployment automation using industry-standard deployment and configuration tools.
Design and implement continuous delivery pipelines that serve the purpose of provisioning and operating client test as well as production environments.
Implement and stay abreast of Cloud and DevOps industry best practices and tooling.
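The continuous-delivery responsibilities above all reduce to the same core control flow: run ordered stages, stop on the first failure. A toy sketch of that skeleton (stage names are invented; real pipelines delegate each step to a build agent):

```python
def run_pipeline(stages):
    """Run (name, callable) stages in order; stop at the first failure.

    Returns a dict mapping each executed stage name to "passed" or a
    "failed: ..." message. Stages after a failure are never run.
    """
    results = {}
    for name, step in stages:
        try:
            step()
            results[name] = "passed"
        except Exception as exc:
            results[name] = f"failed: {exc}"
            break  # later stages never run after a failure
    return results
```

Tools like Jenkins or Azure DevOps add parallelism, approvals and artifact passing on top, but the fail-fast sequencing shown here is the part every pipeline shares.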
- Strong experience with the Java programming language or with DevOps on Google Cloud.
- Strong communication skills.
- Experience in Agile methodologies
- Certification as a Professional Google Cloud Data Engineer will be an added advantage.
- Experience on Google Cloud Platform.
- Experience on Java or DevOps
Required Key Skills :
- Excellent verbal and written communication and interpersonal skills.
- Ability to work independently and within a team environment.
- Interpersonal skills
- GCP, Cloud, Programming
- Agile
- Java programming language or DevOps experience.
CTC- 4L - 7L
Senior Cloud Engineer
at a product-based organisation in Pune
If you are looking for a good opportunity in Cloud Development/DevOps, this is the right one.
EXP: 4-10 YRs
Location:Pune
Job Type: Permanent
Minimum qualifications:
- Education: Bachelor's or Master's degree
- Proficient in the English language.
Relevant experience:
- Should have worked for at least four years as a DevOps/Cloud Engineer
- Should have worked on the AWS Cloud Environment in depth
- Should have worked in an Infrastructure-as-Code environment, or understand it very clearly
- Has done infrastructure coding using CloudFormation/Terraform, configuration management using Chef/Ansible, and an enterprise bus (RabbitMQ/Kafka)
- Deep understanding of microservice design; aware of centralized caching (Redis) and centralized configuration (Consul/Zookeeper)