
What you will do:
- Handling configuration management, web services architectures, DevOps implementation, build & release management, database management, backups, and monitoring
- Logging, metrics, and alerting management
- Creating Dockerfiles (a small build sketch follows this list)
- Performing root cause analysis for production errors
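For illustration only: Dockerfiles are usually written by hand, but image builds can also be scripted, for example with the Docker SDK for Python. A minimal sketch follows; the base image, tag, and inline Dockerfile are assumptions for the example, not anything specified in this posting.

```python
# A hedged sketch: build an image from an inline Dockerfile using the
# Docker SDK for Python (docker-py). Requires a running Docker daemon.
import io

import docker

# Hypothetical Dockerfile content; kept context-free so it builds from
# a file object alone (no build context directory needed).
DOCKERFILE = b"""\
FROM python:3.12-slim
CMD ["python", "-c", "print('hello from the container')"]
"""

def build_image(tag: str = "demo-app:latest"):
    client = docker.from_env()  # connects to the local Docker daemon
    image, logs = client.images.build(fileobj=io.BytesIO(DOCKERFILE), tag=tag)
    for entry in logs:
        print(entry.get("stream", ""), end="")  # stream build output
    return image

if __name__ == "__main__":
    print(build_image().tags)
```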
What you need to have:
- 12+ years of experience in software development/QA/software deployment, with 5+ years of experience managing high-performing teams
- Proficiency in VMware, AWS, and cloud application development and deployment
- Good knowledge of Java and Node.js
- Experience working with RESTful APIs, JSON, etc.
- Experience with unit/functional test automation is a plus
- Experience with MySQL, MongoDB, Redis, RabbitMQ
- Proficiency in Jenkins, Ansible, and Terraform/Chef/Ant
- Proficiency in Linux-based operating systems
- Proficiency with cloud infrastructure tools like Docker and Kubernetes
- Strong problem-solving and analytical skills
- Good written and oral communication skills
- Sound understanding of areas of Computer Science such as algorithms, data structures, object-oriented design, and databases
- Proficiency in monitoring and observability (a minimal metrics sketch follows this list)
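For illustration, the monitoring and observability work above typically starts with instrumenting services. Below is a minimal sketch using the Python prometheus_client library; the metric names and port are assumptions for the example, not anything from the posting.

```python
# A minimal sketch of app-level metrics with prometheus_client.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUEST_LATENCY = Histogram("request_latency_seconds", "Request latency")
REQUEST_ERRORS = Counter("request_errors_total", "Failed requests")

def handle_request():
    # Simulate a request; an alerting rule (e.g. in Alertmanager) would
    # fire on a sustained increase in the error rate.
    with REQUEST_LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))
        if random.random() < 0.05:
            REQUEST_ERRORS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```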

Similar jobs
Role: Senior Platform Engineer (GCP Cloud)
Experience Level: 3 to 6 Years
Work location: Mumbai
Mode: Hybrid
Role & Responsibilities:
- Build automation software for cloud platforms and applications
- Drive Infrastructure as Code (IaC) adoption
- Design self-service, self-healing monitoring and alerting tools
- Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
- Build Kubernetes container platforms
- Introduce new cloud technologies for business innovation
Requirements:
- Hands-on experience with GCP Cloud
- Knowledge of cloud services (compute, storage, network, messaging)
- IaC tools experience (Terraform/CloudFormation) (a small automation sketch follows this list)
- SQL & NoSQL databases (Postgres, Cassandra)
- Automation tools (Puppet/Chef/Ansible)
- Strong Linux administration skills
- Programming: Bash/Python/Java/Scala
- CI/CD pipeline expertise (Jenkins, Git, Maven)
- Multi-region deployment experience
- Agile/Scrum/DevOps methodology
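As a hedged illustration of the IaC item above: CI pipelines often wrap the Terraform CLI in a small script. The sketch below assumes a ./infra directory containing Terraform configuration; the commands themselves are standard terraform init/plan/apply.

```python
# A minimal sketch of IaC automation: driving the Terraform CLI from
# Python with subprocess. The working directory is hypothetical.
import subprocess

def terraform(*args: str, cwd: str = "./infra") -> None:
    # check=True raises CalledProcessError on a non-zero exit code,
    # failing the pipeline stage early.
    subprocess.run(["terraform", *args], cwd=cwd, check=True)

if __name__ == "__main__":
    terraform("init", "-input=false")
    terraform("plan", "-input=false", "-out=tfplan")
    terraform("apply", "-input=false", "tfplan")
```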
Job Specification:
- Job Location - Noida
- Experience - 2-5 Years
- Qualification - B.Tech, BE, MCA (Technical background required)
- Working Days - 5
- Job nature - Permanent
- Role - IT Cloud Engineer
- Proficient in Linux.
- Hands on experience with AWS cloud or Google Cloud.
- Knowledge of container technology like Docker.
- Expertise in scripting languages (shell or Python).
- Working knowledge of the LAMP/LEMP stack, networking, and version control systems like GitLab or GitHub.
Job Description:
The incumbent would be responsible for:
- Deploying various infrastructures on cloud platforms like AWS, GCP, Azure, OVH, etc.
- Server monitoring, analysis, and troubleshooting (a minimal health-check example follows this list).
- Deploying multi-tier architectures using microservices.
- Integrating container technologies like Docker and Kubernetes as per application requirements.
- Automating workflows with Python or shell scripting.
- CI/CD integration for application lifecycle management.
- Hosting and managing websites on Linux machines.
- Frontend, backend, and database optimization.
- Protecting operations by keeping information confidential.
- Providing information by collecting, analyzing, and summarizing development & service issues.
- Preparing and installing solutions by determining and designing system specifications, standards, and programming.
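A minimal example of the monitoring and scripting duties above: polling HTTP health endpoints and logging failures. The URLs are placeholders; a real setup would page or alert rather than only log.

```python
# A hedged sketch of a server health check using requests + logging.
import logging

import requests

SITES = ["https://example.com/health", "https://example.org/health"]  # placeholders

logging.basicConfig(level=logging.INFO)

def check(url: str) -> bool:
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()  # treat 4xx/5xx as failures
        logging.info("%s OK (%.0f ms)", url, resp.elapsed.total_seconds() * 1000)
        return True
    except requests.RequestException as exc:
        logging.error("%s FAILED: %s", url, exc)
        return False

if __name__ == "__main__":
    results = [check(url) for url in SITES]
    raise SystemExit(0 if all(results) else 1)  # non-zero exit on any failure
```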
We are looking for very hands-on DevOps engineers with 3 to 6 years of experience. The person will be part of a team responsible for designing and implementing automation from scratch for medium to large scale cloud infrastructure and providing 24x7 services to our North American and European customers. This includes ensuring ~100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
This person MUST have:
- B.E. in Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, with the ability to write shell scripts for any given requirement
- 1+ years of hands-on experience setting up and configuring AWS or GCP services from scratch and maintaining them
- 1+ years of hands-on experience setting up and configuring Kubernetes and EKS and ensuring high availability of container orchestration (a small health-check sketch follows this list)
- 1+ years of hands-on experience setting up CI/CD from scratch in Jenkins and GitLab
- Experience configuring and maintaining at least one monitoring tool
- Excellent verbal and written communication skills
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS)
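A small, illustrative health check in the spirit of the Kubernetes requirement above, using the official Python client. It assumes a reachable kubeconfig; inside a cluster you would call load_incluster_config() instead.

```python
# A hedged sketch: report pods that are not Running or Succeeded, using
# the official kubernetes Python client.
from kubernetes import client, config

def unhealthy_pods() -> list[str]:
    config.load_kube_config()  # in-cluster: config.load_incluster_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}")
    return bad

if __name__ == "__main__":
    for name in unhealthy_pods():
        print("unhealthy:", name)
```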
Experience:
- Minimum 3 years of experience as a DevOps automation engineer building, running, and maintaining production sites
- Not looking for candidates whose experience is limited to L1/L2 support or build-and-deploy work
Location: Remote, anywhere in India.
Timings:
- 40 hours per week (~6.5 hours per day, 6 days per week), in shifts that rotate every month
Position:
- Full time/Direct
- Great benefits: PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. Our notice period is only 15 days.
We’re hiring a DevOps Engineer who’s passionate about automation, reliability, and scaling infrastructure for modern cloud-native applications. If you thrive in dynamic environments and love problem-solving at scale, we’d love to meet you!
🔧 Key Responsibilities
- Manage and support production systems with on-call rotations
- Deploy and maintain scalable infrastructure on AWS (ECS, EC2, EKS, S3, RDS, ELB, IAM, Lambda)
- Build infrastructure using Terraform
- Manage and monitor Kubernetes clusters and Docker containers
- Automate deployment and configuration using Ansible or similar tools
- Ensure systems reliability using robust monitoring and alerting tools
- Work with Linux OS and network protocols like HTTP, DNS, SMTP, and LDAP
- Manage services like Nginx, HAProxy, MySQL, SSH
- Collaborate with development, QA, and product teams
- Document systems and infrastructure best practices
✅ Required Skills
- 4+ years in DevOps, SRE, or Systems Administration
- Hands-on experience with AWS, Kubernetes, Docker (a small AWS sketch follows this list)
- Proficient with Terraform, Ansible, and Linux systems
- Strong understanding of networking, system logs, and debugging
- Excellent communication and documentation skills
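To illustrate the AWS items above, here is a minimal boto3 sketch that reports EC2 instances whose status checks are not passing. The region is an assumption; a production script would paginate results and alert.

```python
# A hedged sketch of AWS operational tooling with boto3.
import boto3

def failing_instances(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    failing = []
    # IncludeAllInstances=True also reports stopped instances
    statuses = ec2.describe_instance_status(IncludeAllInstances=True)
    for status in statuses["InstanceStatuses"]:
        if status["InstanceStatus"]["Status"] not in ("ok", "not-applicable"):
            failing.append(status["InstanceId"])
    return failing

if __name__ == "__main__":
    print(failing_instances())
```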
The candidate must demonstrate a high level of ownership, integrity, and leadership, and be flexible and adaptive, with a strong desire to learn and excel.
Required Skills:
- Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
- Excellent knowledge of AWS offerings around Cloud and DevOps
- Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
- Should be familiar with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
- Strong experience in Python, Shell Scripting, Ansible, and Terraform
- Good command over monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc. (a small Prometheus query sketch follows this list)
- Experience with Linux/Unix systems administration.
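As a small illustration of the monitoring-tool skills above, the sketch below queries Prometheus's HTTP API for an instant vector. The server URL and the PromQL expression are assumptions for the example.

```python
# A hedged sketch: run an instant query against Prometheus's HTTP API.
import requests

PROMETHEUS = "http://localhost:9090"  # placeholder server

def instant_query(expr: str) -> list[dict]:
    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query", params={"query": expr}, timeout=10
    )
    resp.raise_for_status()
    body = resp.json()
    assert body["status"] == "success", body
    return body["data"]["result"]  # list of {"metric": {...}, "value": [ts, v]}

if __name__ == "__main__":
    # Example expression: per-instance non-idle CPU rate over 5 minutes.
    for series in instant_query('rate(node_cpu_seconds_total{mode!="idle"}[5m])'):
        print(series["metric"].get("instance"), series["value"][1])
```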
Requirements
Core skills:
● Strong background in Linux/Unix administration and troubleshooting
● Experience with AWS (ideally including some of the following: VPC, Lambda, EC2, ElastiCache, Route53, SNS, CloudWatch, CloudFront, Redshift, OpenSearch, ELK, etc.)
● Experience with infra automation and orchestration tools, including Terraform, Packer, Helm, and Ansible
● Hands-on experience with container technologies like Docker and Kubernetes/EKS, and with GitLab and Jenkins as pipeline tools
● Experience in one or more of Groovy, Perl, Python, or Go, or scripting experience in shell
● Good understanding of Continuous Integration (CI) and Continuous Deployment (CD) pipelines using tools like Jenkins, FlexCD, ArgoCD, Spinnaker, etc.
● Working knowledge of key-value stores and database technologies (SQL and NoSQL) such as MongoDB and MySQL
● Experience with application monitoring tools like Prometheus and Grafana, and APM tools like NewRelic, Datadog, and Pinpoint
● Good exposure to middleware components like ELK, Redis, and Kafka, and to IoT-based systems involving NewRelic, Akamai, Apache/Nginx, Grafana, Prometheus, etc.
Good to have:
● Prior experience in logistics, payment, and IoT-based applications
● Experience with unmanaged MongoDB clusters, automation and operations, and analytics
● Writing procedures for backup and disaster recovery
Core Experience
● 3-5 years of hands-on DevOps experience
● 2+ years of hands-on Kubernetes experience
● 3+ years of cloud platform experience, with special focus on Lambda, R53, SNS, CloudFront, CloudWatch, Elastic Beanstalk, RDS, OpenSearch, EC2, and security tools
● 2+ years of scripting experience in Python/Go and shell
● 2+ years of familiarity with CI/CD, Git, IaC, monitoring, and logging tools (a small CI/CD scripting sketch follows this list)
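A small example of the CI/CD scripting named above: triggering a GitLab pipeline through its REST API. The GitLab URL, project ID, and trigger token are placeholders.

```python
# A hedged sketch: kick off a GitLab pipeline via the trigger API.
import os

import requests

GITLAB = os.environ.get("GITLAB_URL", "https://gitlab.example.com")  # placeholder
PROJECT_ID = 42  # hypothetical project
TOKEN = os.environ["GITLAB_TRIGGER_TOKEN"]  # pipeline trigger token

def trigger_pipeline(ref: str = "main") -> str:
    resp = requests.post(
        f"{GITLAB}/api/v4/projects/{PROJECT_ID}/trigger/pipeline",
        data={"token": TOKEN, "ref": ref},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["web_url"]  # URL of the newly created pipeline

if __name__ == "__main__":
    print("pipeline started:", trigger_pipeline())
```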
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter and fast learner, with a desire to work in a startup environment
- Experience working with public clouds like AWS
- Operating and monitoring cloud infrastructure on AWS
- Primary focus on building, implementing, and managing operational support
- Designing, developing, and troubleshooting automation scripts (configuration/infrastructure as code or others) for managing infrastructure
- Expertise in at least one scripting language (Python, shell, etc.)
- Experience with Nginx/HAProxy, the ELK stack, Ansible, Terraform, the Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring
- Proven experience with CI/CD pipelines and handling database upgrade issues
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
- Working on the scalability, maintainability, and reliability of the company's products
- Working with clients to solve their day-to-day challenges, moving manual processes to automation
- Keeping systems reliable and gauging the effort it takes to get there
- Juxtaposing tools and technologies to choose x over y
- Understanding Infrastructure as Code and applying software design principles to it
- Automating tedious work using your favourite scripting languages
- Taking code from the local system to production by implementing Continuous Integration and Delivery principles
What you need to have:
- Experience with at least one programming language such as Go, Python, Java, or Ruby.
- Work experience with public cloud providers like AWS, GCP or Azure.
- Understanding of Linux systems and Containers
- Meticulous in creating and following runbooks and checklists
- Microservices experience and use of orchestration tools like Kubernetes/Nomad.
- Understanding of computer networking fundamentals like TCP and UDP (a small TCP check sketch follows this list).
- Strong bash scripting skills.
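A minimal sketch touching the networking-fundamentals and runbook items above: checking whether a TCP port accepts connections. Hosts and ports are placeholders.

```python
# A hedged sketch of a TCP reachability check with the stdlib socket module.
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        # create_connection performs the full TCP three-way handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in [("example.com", 443), ("example.com", 22)]:
        state = "open" if tcp_port_open(host, port) else "closed/filtered"
        print(f"{host}:{port} {state}")
```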
Contract to hire
Total 8 years of experience, with 4 years relevant
• Experience with building and deploying software in the cloud, preferably on Google Cloud Platform (GCP)
• Sound knowledge to build infrastructure as code with Terraform
• Comfortable with test-driven development, testing frameworks, and building CI/CD pipelines with GitLab as the version control software (a minimal TDD sketch follows this list)
• Strong containerisation skills with Docker, Kubernetes, and Helm
• Familiar with Gitlab, systems integration and BDD
• Solid networking skills e.g. IP, DNS, VPN, HTTP/HTTPS
• Scripting experience (Bash, Python, etc.)
• Experience in Linux/Unix administration
• Experience with agile methods and practices (Scrum, Kanban, Continuous Integration, Pair Programming, TDD)
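As a brief illustration of the TDD item above, here is a pytest-style example; the function under test (semver_bump) is hypothetical, not something from the posting.

```python
# A hedged sketch of test-driven style with pytest: tests pin the
# behaviour of a small, hypothetical version-bumping helper.
import pytest

def semver_bump(version: str, part: str) -> str:
    """Bump the given part ('major', 'minor', 'patch') of a semver string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

@pytest.mark.parametrize(
    "version,part,expected",
    [("1.2.3", "patch", "1.2.4"), ("1.2.3", "minor", "1.3.0"), ("1.2.3", "major", "2.0.0")],
)
def test_semver_bump(version, part, expected):
    assert semver_bump(version, part) == expected

def test_semver_bump_rejects_unknown_part():
    with pytest.raises(ValueError):
        semver_bump("1.2.3", "flavor")
```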
Job Description:
○ Develop best practices for the team and take responsibility for architecture, solutions, and documentation operations, in order to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Good knowledge of Terraform, including work on large TF code bases.
○ Deep understanding of Terraform best practices and writing TF modules.
○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3, and IAM. A cost-aware mindset towards cloud services.
○ Deep understanding of kernel, networking, and OS fundamentals.
NOTICE PERIOD - Max - 30 days
