
DevOps Architect
at Altimetrik
Experience: 10-12+ years of relevant DevOps experience
Locations: Bangalore, Chennai, Pune, Hyderabad, Jaipur
Qualification:
• Bachelor's or advanced degree in Computer Science, Software Engineering, or equivalent is required.
• Certifications in relevant areas are desirable.
Technical Skillset (skill - proficiency level):
- Build tools (Ant or Maven) - Expert
- CI/CD tools (Jenkins or GitHub CI/CD) - Expert
- Cloud DevOps (AWS CodeBuild, CodeDeploy, CodePipeline, etc.) or Azure DevOps (see the illustrative sketch below) - Expert
- Infrastructure as Code (Terraform, Helm charts, etc.) - Expert
- Containerization (Docker, Docker Registry) - Expert
- Scripting (Linux) - Expert
- Cluster deployment (Kubernetes) & maintenance - Expert
- Programming (Java) - Intermediate
- Application types for DevOps (streaming such as Spark and Kafka; big data such as Hadoop) - Expert
- Artifact repository (JFrog Artifactory) - Expert
- Monitoring & reporting (Prometheus, Grafana, PagerDuty, etc.) - Expert
- Ansible, MySQL, PostgreSQL - Intermediate
• Source control (e.g., Git, Bitbucket, SVN, VSTS)
• Continuous integration (e.g., Jenkins, Bamboo, VSTS)
• Infrastructure automation (e.g., Puppet, Chef, Ansible)
• Deployment automation & orchestration (e.g., Jenkins, VSTS, Octopus Deploy)
• Container concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (e.g., AWS, Azure, Google Cloud, OpenStack)
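As a purely illustrative, hypothetical sketch of the AWS CI/CD tooling named in the skill list above (not part of the posting's requirements), the snippet below uses boto3 to report the per-stage status of an AWS CodePipeline; the pipeline name and credentials setup are assumptions.

```python
# Minimal sketch (assumptions: a CodePipeline named "demo-pipeline" exists
# and AWS credentials are already configured in the environment).
import boto3

def report_pipeline_state(pipeline_name: str = "demo-pipeline") -> None:
    """Print the latest execution status of each stage in a CodePipeline."""
    codepipeline = boto3.client("codepipeline")
    state = codepipeline.get_pipeline_state(name=pipeline_name)
    for stage in state["stageStates"]:
        latest = stage.get("latestExecution", {})  # absent if the stage never ran
        print(f'{stage["stageName"]}: {latest.get("status", "NOT_RUN")}')

if __name__ == "__main__":
    report_pipeline_state()
```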
Roles and Responsibilities
• Automating processes with the proper tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering effective, broadly applicable processes.
• Helping operations and development teams solve their problems.
• Supervising, examining, and handling technical operations.
• Defining DevOps processes and operations.
• Ability to lead teams.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies, along with configuration management practices in Unix and Linux-based environments.
• Possess sound knowledge of cloud-based environments.
• Experience with automated deployment and CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hands-on experience working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems is a must (Git an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills, knowledge of the latest industry trends, and a highly innovative mindset

Job Title: Cloud Engineer - Azure DevOps
Job Location: Mumbai (Andheri East)
About the company:
MIRACLE HUB CLIENT is a predictive analytics and artificial intelligence company headquartered in Boston, US, with offices across the globe. We build prediction models and algorithms to solve high-priority business problems. Working across multiple industries, we have designed and developed breakthrough analytic products and decision-making tools by leveraging predictive analytics, AI, machine learning, and deep domain expertise.
Skill-sets Required:
- Azure Architecture
- DevOps Expert
- Infrastructure as Code
- Automating CI/CD pipelines
- Security and Risk Compliance
- Validate Tech Design
Job Role:
- Create a well-informed cloud strategy and manage the adoption process and Azure-based architecture
- Develop and organize automated cloud systems
- Work with other teams in continuous integration and continuous deployment pipeline in delivering solutions
- Work closely with IT security to monitor the company's cloud privacy
Desired Candidate Profile:
- Bachelor’s degree in computer science, computer engineering, or relevant field.
- A minimum of 3 years’ experience in a similar role.
- Strong knowledge of database structure systems and data mining.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- IMMEDIATE JOINING (A notice period of 1 month is also acceptable)
- Excellent English communication and presentation skills, both verbal and written
- Charismatic, competitive and enthusiastic personality with negotiation skills
Compensation: 12-15 LPA with a minimum of 5 years of experience (or as per last drawn).
Job Description
We are looking for an experienced software engineer with a strong background in DevOps and in handling traffic and infrastructure at scale.
Responsibilities:
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment practices.
Build and operate infrastructure to support the website, backend clusters, and ML projects in the organization.
Monitor and track the performance and reliability of our services and software to meet promised SLAs.
You are the right fit if you have:
1+ years of experience working on distributed systems and shipping high-quality product features on schedule
Experience with Python, including object-oriented programming
Container administration and development using Kubernetes, Docker, Mesos, or similar
Infrastructure automation through Terraform, Chef, Ansible, Puppet, Packer, or similar
Knowledge of cloud compute technologies and network monitoring
Experience with cloud orchestration frameworks, and development and SRE support of these systems
Experience with CI/CD pipelines, including VCS (Git, SVN, etc.), GitLab Runners, and Jenkins
Working with or supporting production, test, and development environments for medium-to-large user bases
Installing and configuring application servers and database servers
Experience developing scripts to automate software deployments and installations (illustrated in the sketch below)
Experience in a 24x7 high-availability production environment
Ability to arrive at the best solution by seeing the big picture instead of focusing on minor details; root cause analysis
Mandatory skills: Shell/Bash scripting, Unix, Linux, Docker, Kubernetes, AWS, Jenkins, Git
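To make the deployment-automation expectation above concrete, here is a minimal, hypothetical Python sketch (not taken from the posting) that wraps the Docker and kubectl CLIs to build, push, and roll out an image; the image, deployment, and container names are placeholders.

```python
# Hypothetical deployment helper: builds and pushes a Docker image, then
# triggers a Kubernetes rolling update. Assumes docker and kubectl are
# installed and authenticated; all names below are placeholders.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raise if any step fails

def deploy(image: str, deployment: str, container: str) -> None:
    run(["docker", "build", "-t", image, "."])
    run(["docker", "push", image])
    run(["kubectl", "set", "image", f"deployment/{deployment}", f"{container}={image}"])
    run(["kubectl", "rollout", "status", f"deployment/{deployment}"])

if __name__ == "__main__":
    deploy("registry.example.com/web:1.0.0", "web", "web")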
Job Title: DevOps SDE III
Job Summary
Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.
Responsibilities and Duties
In this role, you will
- Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
- Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
- Build out and improve our observability infrastructure.
- Set up automated auditing processes and improve our applications' security posture.
- Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
- Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
- Continuously improve our build pipelines, automated deployments, and automated testing.
- Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
Qualifications and Skills
Hard requirements for this role:
- 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
- Experience with Git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS, and Kubernetes.)
- Experience in working with infrastructure as code (Terraform/CloudFormation).
- Linux and networking administration experience.
- Strong Linux Shell scripting experience.
- Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred)
- Experience with configuration management tools like Ansible and Packer.
- Experience with container orchestration tools. (Kubernetes/ECS).
- Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
- AWS SysOps Administrator and Developer certifications, or equivalent knowledge
Good to have:
- Experience working with ELK stack.
- Experience supporting JVM applications.
- Experience working with APM tools. (We use Datadog.)
- Experience working in an everything-as-code (XaaC) environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
- Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
- Experience with Jira / Jira help desk.
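Since this posting calls out Python with boto3 and ECS, here is a minimal, hypothetical sketch (not part of the posting; cluster and service names are placeholders) of forcing a new ECS deployment and waiting for the service to stabilize.

```python
# Hypothetical sketch: force a new deployment of an ECS service and wait
# until it is stable. Assumes AWS credentials are configured; the cluster
# and service names are placeholders.
import boto3

def redeploy(cluster: str = "app-cluster", service: str = "web-service") -> None:
    ecs = boto3.client("ecs")
    ecs.update_service(cluster=cluster, service=service, forceNewDeployment=True)
    waiter = ecs.get_waiter("services_stable")
    waiter.wait(cluster=cluster, services=[service])
    print(f"{service} redeployed and stable")

if __name__ == "__main__":
    redeploy()
```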
Who We Are
Grip is building a new category of investment options for the new age of Indian investors. Millennials don't communicate, shop, pay, entertain, or work like the previous generation - so why should they invest the same way?
Started in June 2020, Grip has seen 20% month-on-month growth to become one of India's fastest-growing destinations for alternative investments. Today, Grip offers multiple investment options providing 8-16% annual returns and 1-60 month tenures as the country's only multi-asset (lease financing, inventory financing, corporate bonds, start-up equity, commercial real estate) investment platform. With a minimum investment size of INR 20,000, Grip is democratizing investment options that have previously only been available to the largest funds and family offices.
Finance and technology (FinTech) is what we do, but people are at the core of our mission. From client-facing roles to technology, and everywhere in between, you'll work alongside a diverse team who loves to solve problems, think creatively, and fly the plane as we continue to build it.
What We Can Offer You
● Young and fast-growing company with a healthy work-life balance
● Great culture based on the following core values
○ Courage
○ Transparency
○ Ownership
○ Customer Obsession
○ Celebration
● Lean structure and no micromanaging. You get to own your work
● The company has turned two, so you get a seat on a rocket ship that's just taking off!
● High focus on Learning & Development and monetary support for relevant upskilling
● Competitive compensation along with equity ownership for wealth creation
What You’ll Do
● Design cloud infrastructure that is secure, scalable, and highly available on AWS
● Work collaboratively with software engineering to define infrastructure and deployment requirements
● Provision, configure and maintain AWS cloud infrastructure defined as code
● Ensure configuration and compliance with configuration management tools
● Administer and troubleshoot Linux based systems
● Troubleshoot problems across a wide array of services and functional areas
● Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
● Perform infrastructure cost analysis and optimization
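As a hedged illustration of the cost-analysis responsibility listed just above (not part of the posting), a minimal boto3 Cost Explorer query grouping one month's spend by service might look like the sketch below; the date range is a placeholder.

```python
# Minimal sketch: summarize AWS spend by service for a fixed period using
# Cost Explorer. Assumes Cost Explorer is enabled and credentials are set;
# the date range is a placeholder.
import boto3

def spend_by_service(start: str = "2024-01-01", end: str = "2024-02-01") -> None:
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f'{group["Keys"][0]}: ${float(amount):.2f}')

if __name__ == "__main__":
    spend_by_service()
```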
Your Superpowers
● At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3, RDS, Elastic Beanstalk)
● Strong understanding of how to secure AWS environments and meet compliance requirements
● Expertise using Chef/CloudFormation/Ansible for configuration management
● Hands-on experience deploying and managing infrastructure with Terraform
● Solid foundation of networking and Linux administration
● Experience with Docker, GitHub, Kubernetes, ELK, and deploying applications on AWS
● Ability to learn/use a wide variety of open source technologies and tools
● Strong bias for action and ownership
- Collaborate with Dev, QA and Data Science teams on environment maintenance, monitoring (ELK, Prometheus or equivalent), deployments and diagnostics
- Administer a hybrid datacenter, including AWS and EC2 cloud assets
- Administer, automate and troubleshoot container based solutions deployed on AWS ECS
- Be able to troubleshoot problems and provide feedback to engineering on issues
- Automate deployment (Ansible, Python), build (Git, Maven, Make, or equivalent), and integration (Jenkins, Nexus) processes
- Learn and administer technologies such as ELK, Hadoop, etc.
- Be a self-starter with the enthusiasm to learn and pick up new technologies in a fast-paced environment.
Need to have
- Hands-on Experience in Cloud based DevOps
- Experience working in AWS (EC2, S3, CloudFront, ECR, ECS etc)
- Experience with any programming language.
- Experience using Ansible, Docker, Jenkins, Kubernetes
- Experience in Python.
- Should be very comfortable working in Linux/Unix environment.
- Exposure to Shell Scripting.
- Solid troubleshooting skills
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring, and infrastructure-as-code, and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
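As a minimal, hypothetical sketch of the automated alerting described in the responsibilities above (the instance ID, SNS topic ARN, and threshold are placeholders, not from the posting), a CPU alarm could be created programmatically with boto3:

```python
# Hypothetical sketch: create a CloudWatch CPU alarm that notifies an SNS
# topic. The instance ID, topic ARN, and threshold are placeholders.
import boto3

def create_cpu_alarm(instance_id: str, topic_arn: str, threshold: float = 80.0) -> None:
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,               # 5-minute evaluation window
        EvaluationPeriods=2,      # two consecutive breaches before alarming
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )

if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:123456789012:alerts")
```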
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working with HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.
About Us:
Varivas is a community that allows users to create, support, and recommend content. The mission of Varivas is to give community freedom, ease, and complete control of their content.
Varivas is an early-stage startup looking for its first few members.
https://www.varivas.community/
Become part of a core team of an upcoming startup building an exciting product from scratch.
Your Responsibilities:
- Write scalable backend code and audit existing backend code for sanity, performance and security
- Design and implementation of on-demand scalable, and performant applications.
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Incident management and root cause analysis
- Selecting and deploying appropriate CI/CD tools for various test automation frameworks
- Monitoring processes during the entire lifecycle
- Encouraging and building automated processes wherever possible
- Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
- Achieve repeatability, fast recovery, best practices, and delegate proper ownership permissions to teams
- Analyzes the technology currently being used and develops plans and processes for improvement and expansion for better cost and efficiency
- Collaborate with and assist QA in performing load tests and assessing performance optimization
- Writing and maintaining DevOps process documentation in tools such as Confluence
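As a small, hypothetical illustration of the automated checks mentioned in the responsibilities above (the endpoint URLs are placeholders, not Varivas's), a post-deploy smoke test that a CI/CD stage could run might look like this:

```python
# Hypothetical post-deploy smoke test: hit a few endpoints and fail the CI
# stage (non-zero exit) if any of them is unhealthy. URLs are placeholders.
import sys
import urllib.request

ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/api/status",
]

def check(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception as exc:
        print(f"{url}: {exc}")
        return False

if __name__ == "__main__":
    failures = [u for u in ENDPOINTS if not check(u)]
    sys.exit(1 if failures else 0)
```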
We are looking for someone who is:
- Ready to take complete ownership of the whole infra at Varivas
- Has previous experience building and maintaining a live production app
- You have an eye for detail.
- You’re a problem solver and a perpetual learner.
- You possess a positive and solution-oriented mindset.
- Has good problem-solving and troubleshooting skills
- Can build high-performing web and native applications that are robust and easy to maintain
Our Current DevOps stack:
- GitHub/Bitbucket
- Azure cloud storage
- MongoDB Atlas
- Jira
- Selenium, Cypress test automation
- Heroku/DigitalOcean
- GoDaddy
- Google Analytics/Search Console
- SendGrid
- Sentry.io
- Hotjar
What we can offer:
- Payment for work done (of course 🙂)
- Remote work from anywhere
- Part-time or full-time engagement
- Complete flexibility of working hours
- Complete freedom and ownership of your work
- No meetings, standups, or daily status group calls. (We prefer asynchronous communication like Slack.)


Cloud native technologies - Kubernetes (EKS, GKE, AKS), AWS ECS, Helm, CircleCI, Harness, serverless platforms (AWS Fargate, etc.)
Infrastructure as Code tools - Terraform, CloudFormation, Ansible
Scripting - Python, Bash
Desired Skills & Experience:
Projects/internships with coding experience in any of JavaScript, Python, Golang, Java, etc.
Hands-on scripting and software development fluency in any programming language (Python, Go, Node, Ruby).
Basic understanding of Computer Science fundamentals - Networking, Web Architecture etc.
Infrastructure automation experience with knowledge of at least a few of these tools: Chef, Puppet, Ansible, CloudFormation, Terraform, Packer, Jenkins etc.
Bonus points if you have contributed to open-source projects or participated in competitive coding platforms like HackerEarth, Codeforces, SPOJ, etc.
You’re willing to learn various new technologies and concepts. The “cloud-native” field of software is evolving fast and you’ll need to quickly learn new technologies as required.
Communication: You like discussing a plan upfront, welcome collaboration, and are an excellent verbal and written communicator.
B.E/B.Tech/M.Tech or equivalent experience.

- Degree in Computer Science or related discipline.
- AWS Certified Solutions Architect certification required
- 5+ years of architecture, design, implementation, and support of highly complex solutions (i.e. having an architectural sense for ensuring security and compliance, availability, reliability, etc.)
- Deep technical experience in serverless AWS infrastructure
- Understanding of cloud automation and orchestration tools and techniques, including Git, Terraform, ARM, or equivalent
- Create technical design documents, understand technical designs, and translate them into application requirements.
- Exercise independent judgment in evaluating alternative technical solutions
- Participate in code and design review process
- Write unit test cases for quality check of the deliverables
- Ability to work closely with others in a team environment as well as independently
- Proven ability to problem solve and troubleshoot
- Excellent verbal and written communication skills and the ability to interact professionally with a diverse group, executives, managers, and subject matter experts
- Excellent English communication skills are required
We are looking for a Solution Architect with at least 5 years’ experience working on the following to join our growing team:
- AWS
- PostgreSQL
- EC2 on AWS
- Cognito
- and, most importantly, Serverless
You will need a strong technical AWS background focused on architecting serverless (e.g., Lambda) AWS infrastructure.
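As a hedged illustration of the serverless emphasis above (a minimal sketch, with a made-up response body, not part of the posting), an AWS Lambda handler in Python is just a function that receives an event and returns a response:

```python
# Minimal AWS Lambda handler sketch (Python runtime). The event shape and
# response body are placeholders for illustration only.
import json

def lambda_handler(event, context):
    """Echo back the received event as an API Gateway-style JSON response."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"received": event}),
    }
```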


