
- You have a Bachelor's degree in computer science or equivalent.
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.

Job Description
Experience: 5 - 9 years
Location: Bangalore/Pune/Hyderabad
Work Mode: Hybrid (3 days WFO)
Senior Cloud Infrastructure Engineer for Data Platform
The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.
Key Responsibilities:
Cloud Infrastructure Design & Management
Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.
Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.
Optimize cloud costs and ensure high availability and disaster recovery for critical systems.
Databricks Platform Management
Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.
Automate cluster management, job scheduling, and monitoring within Databricks.
Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.
CI/CD Pipeline Development
Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.
Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.
Monitoring & Incident Management
Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.
Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.
Security & Compliance
Enforce security best practices, including identity and access management (IAM), encryption, and network security.
Ensure compliance with organizational and regulatory standards for data protection and cloud operations.
Collaboration & Documentation
Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.
Maintain comprehensive documentation for infrastructure, processes, and configurations.
Required Qualifications
Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
Must Have Experience:
6+ years of experience in DevOps or Cloud Engineering roles.
Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.
Hands-on experience with Databricks for data engineering and analytics.
Technical Skills:
Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.
Strong scripting skills in Python or Bash.
Experience with containerization and orchestration tools like Docker and Kubernetes.
Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
1. Candidate must be from a product-based company with experience handling large-scale production traffic
2. Minimum 6 years of experience working as a DevOps/Infrastructure Consultant
3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)
4. Own end-to-end infrastructure, from non-prod to prod environments, including self-managed infrastructure
5. Candidate must have hands-on experience with database migration from scratch
6. Must have a firm hold on the container orchestration tool Kubernetes
7. Should have expertise in configuration management tools like Ansible, Terraform, Chef, or Puppet
8. Understanding of programming languages like Go, Python, and Java
9. Working experience with data stores like MongoDB, Redis, Cassandra, Elasticsearch, and Kafka
10. Working experience on a cloud platform (AWS)
11. Candidate should have a minimum of 1.5 years' tenure per organization, and a clear reason for relocation
Senior Software Engineer I - DevOps Engineer
Exceptional software engineering is challenging. Amplifying it to ensure that multiple teams can concurrently create and manage a vast, intricate product escalates the complexity. As a Senior Software Engineer within the Release Engineering team at Sumo Logic, your task will be to develop and sustain automated tooling for the release processes of all our services. You will contribute significantly to establishing automated delivery pipelines, empowering autonomous teams to create independently deployable services. Your role is integral to our overarching strategy of enhancing software delivery and progressing Sumo Logic’s internal Platform-as-a-Service.
What you will do:
• Own the delivery pipeline and release automation framework for all Sumo services
• Educate and collaborate with teams during both design and development phases to ensure best practices.
• Mentor a team of Engineers (Junior to Senior) and improve software development processes.
• Evaluate, test, and provide technology and design recommendations to executives.
• Write detailed design documents and documentation on system design and implementation.
• Ensure engineering teams are set up to deliver quality software quickly and reliably.
• Enhance and maintain infrastructure and tooling for development, testing, and debugging.
What you already have:
• B.S. or M.S. Computer Sciences or related discipline
• Ability to influence: Understand people’s values and motivations and influence them towards making good architectural choices.
• Collaborative working style: You can work with other engineers to come up with good decisions.
• Bias towards action: You need to make things happen. It is essential you don’t become an inhibitor of progress, but an enabler.
• Flexibility: You are willing to learn and change. Admit past approaches might not be the right ones now.
Technical skills:
- 4+ years of experience in the design, development, and use of release automation tooling, DevOps, CI/CD, etc.
- 2+ years of experience in software development in Java/Scala/Golang or similar
- 3+ years of experience with software delivery technologies like Jenkins, including experience writing and developing CI/CD pipelines, and knowledge of build tools like Make, Gradle, npm, etc.
- Experience with cloud technologies, such as AWS/Azure/GCP
- Experience with Infrastructure-as-Code and tools such as Terraform
- Experience with scripting languages such as Groovy, Python, Bash, etc.
- Knowledge of monitoring tools such as Prometheus/Grafana or similar tools
- Understanding of GitOps and ArgoCD concepts/workflows
- Understanding of security and compliance aspects of DevSecOps
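Release automation of the kind this role owns often comes down to small, well-tested utilities, for example computing the next release version. As a hedged sketch (not Sumo Logic's actual tooling), here is a minimal semantic-version bump helper:

```python
def bump_version(version: str, part: str) -> str:
    """Bump a 'major.minor.patch' version string.

    part must be 'major', 'minor', or 'patch'. Lower-order fields
    reset to zero, per semantic-versioning convention.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump_version("1.4.2", "minor"))  # 1.5.0
```

A production version of this would also handle pre-release and build-metadata suffixes (e.g. `1.4.2-rc.1`), which this sketch deliberately omits.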
About Us
Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its Sumo Logic SaaS Analytics Log Platform, which helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
Per the Sumo Logic Privacy Policy, employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.

We are now seeking a talented and motivated individual to contribute to our product in the cloud data protection space. The ability to clearly comprehend customer needs in a cloud environment, excellent troubleshooting skills, and the ability to focus on problem resolution until completion are required.
Responsibilities Include:
Review proposed feature requirements
Create test plan and test cases
Analyze performance; diagnose and troubleshoot issues
Enter and track defects
Interact with customers, partners, and development teams
Research customer issues and product initiatives
Provide input for service documentation
Required Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline
3+ years' experience, inclusive of Software-as-a-Service and/or DevOps engineering
Experience with AWS services like VPC, EC2, RDS, SES, ECS, Lambda, S3, ELB
Experience with technologies such as REST, Angular, Messaging, Databases, etc.
Strong troubleshooting skills and issue isolation skills
Possess excellent communication skills (written and verbal English)
Must be able to work as an individual contributor within a team
Ability to think outside the box
Experience in configuring infrastructure
Knowledge of CI / CD
Desirable skills:
Programming skills in scripting languages (e.g., python, bash)
Knowledge of Linux administration
Knowledge of testing tools/frameworks: TestNG, Selenium, etc.
Knowledge of Identity and Security
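Since this role centers on writing test cases against REST services, a minimal sketch of a response-validation check may help illustrate the work. The endpoint shape and field names below are hypothetical, not from any real product:

```python
def validate_health_response(resp: dict) -> list[str]:
    """Return validation errors for a hypothetical /health JSON response.

    An empty list means the response passed all checks.
    """
    errors = []
    if resp.get("status") != "ok":
        errors.append(f"unexpected status: {resp.get('status')!r}")
    if not isinstance(resp.get("uptime_seconds"), (int, float)):
        errors.append("uptime_seconds missing or not numeric")
    for dep, state in resp.get("dependencies", {}).items():
        if state != "healthy":
            errors.append(f"dependency {dep} is {state}")
    return errors

print(validate_health_response(
    {"status": "ok", "uptime_seconds": 120, "dependencies": {"db": "healthy"}}
))  # []
```

Returning a list of errors rather than raising on the first failure lets a test report every mismatch in one run, which is usually more useful when triaging defects.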
ApnaComplex is one of India’s largest and fastest-growing PropTech disruptors within the Society & Apartment Management business. The SaaS-based B2C platform is headquartered out of India’s tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 societies, managing over 6 lakh households in over 80 Indian cities, to seamlessly manage all aspects of running large complexes.
ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.
If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.
Must have-
- Knowledge of Docker
- Knowledge of Terraform
- Knowledge of AWS
Good to have -
- Kubernetes
- Scripting languages: PHP, Go, and Python
- Webserver knowledge
- Logging and monitoring experience
- Ability to test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
- Build and maintain highly available production systems.
- Must know how to choose the tools and technologies that best fit the business needs.
- Develop software to integrate with internal back-end systems.
- Investigate and resolve technical issues.
- Problem-solving attitude.
- Ability to automate testing, deployment, and monitoring of code.
- Work in close coordination with the development and operations teams to ensure the application performs in line with customer expectations.
- Lead and guide the team in identifying and implementing new technologies.
Skills that will help you build a success story with us
- An ability to quickly understand and solve new problems
- Strong interpersonal skills
- Excellent data interpretation
- Context-switching
- Intrinsically motivated
- A tactical and strategic track record for delivering research-driven results
Quick Glances:
- What to look for at ApnaComplex: https://www.apnacomplex.com/why-apnacomplex
- Who are we? A glimpse of ApnaComplex, know us better: https://www.linkedin.com/company/1070467/admin/
- ApnaComplex in the media: https://www.apnacomplex.com/media-buzz
ANAROCK Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
JOB DETAILS
What You'll Do
MLOps Engineer
Required Candidate profile :
- 3+ years’ experience developing continuous integration and deployment (CI/CD) pipelines (e.g., Jenkins, GitHub Actions) and bringing ML models into CI/CD pipelines
- Candidate with strong Azure expertise
- Experience productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of data engineering (DE)
- Candidate should have prior experience designing, building, testing, and maintaining machine learning infrastructure to empower data scientists to iterate rapidly on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure, including Azure ML, MLflow, and Azure DevOps
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- E2E production experience with Azure ML, Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift, data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, and optimize ML and AI services, maintaining it all in an infrastructure-as-code, everything-in-version-control manner.
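The model-drift monitoring mentioned in the requirements above is often approximated with a simple statistic before reaching for platform tooling. As a hedged, minimal sketch (pure Python, not tied to Azure ML), here is a crude mean-shift check comparing a current feature window against a baseline:

```python
from statistics import mean, stdev

def mean_shift_drift(baseline: list[float], current: list[float],
                     threshold: float = 2.0) -> bool:
    """Flag drift if the current mean sits more than `threshold`
    baseline standard deviations from the baseline mean (a crude z-test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        # Constant baseline: any change in mean counts as drift.
        return mean(current) != mu
    return abs(mean(current) - mu) / sigma > threshold

baseline = [10.0, 10.2, 9.8, 10.1, 9.9]
drifted  = [13.0, 13.2, 12.8, 13.1, 12.9]
print(mean_shift_drift(baseline, baseline))  # False
print(mean_shift_drift(baseline, drifted))   # True
```

Real drift monitors typically use distribution-level tests (e.g., population stability index or Kolmogorov-Smirnov) rather than a mean comparison, but the trigger-on-threshold pattern is the same.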
• Design cloud infrastructure that is secure, scalable, and highly available on AWS
• Define infrastructure and deployment requirements
• Provision, configure and maintain AWS cloud infrastructure defined as code
• Ensure configuration and compliance with configuration management tools
• Troubleshoot problems across a wide array of services and functional areas
• Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
• Perform infrastructure cost analysis and optimization
Qualifications:
• At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
• Strong understanding of how to secure AWS environments and meet compliance requirements
• Expertise in configuration management
• Hands-on experience deploying and managing infrastructure with Terraform
• Solid foundation of networking and Linux administration
• Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
• Ability to learn/use a wide variety of open source technologies and tools
• Strong bias for action and ownership
You will be responsible for
1. Setting up and maintaining cloud (AWS/GCP/Azure) and Kubernetes clusters and automating their operation
2. All operational aspects of the Devtron platform, including maintenance, upgrades, and automation
3. Providing Kubernetes expertise to facilitate smooth and fast customer onboarding onto the Devtron platform
Responsibilities:
1. Manage the Devtron platform on multiple Kubernetes clusters
2. Design and embed industry best practices for online services, including disaster recovery, business continuity, monitoring/alerting, and service health measurement
3. Provide operational support for day-to-day activities involving the deployment of services
4. Identify opportunities for improving the security, reliability, and scalability of the platform
5. Facilitate smooth and fast customer onboarding onto the Devtron platform
6. Drive customer engagement
Requirements:
● Bachelor's Degree in Computer Science or a related field.
● 2+ years working as a DevOps engineer
● Proficient in 1 or more programming languages (e.g. Python, Go, Ruby).
● Familiarity with shell scripts, Linux commands, and networking fundamentals
● Understanding of large-scale distributed systems
● Basic understanding of cloud computing (AWS/GCP/Azure)
Preferred Qualifications:
● Great analytical and interpersonal skills
● Passion for creating efficient, reliable, reusable programs/scripts.
● Excited about technology, with a strong interest in learning about and experimenting with the latest technologies and doing POCs.
● Strong customer focus, ownership, urgency and drive.
● Knowledge of and experience with cloud-native tools like Prometheus, Kubernetes, Docker, and Grafana.
Job Description :
- The engineer should be highly motivated, able to work independently, and able to guide other engineers within and outside the team.
- The engineer should possess varied software skills across shell scripting, Linux, Oracle Database, WebLogic, Git, Ant, Hudson, Jenkins, Docker, and Maven.
- Work is super fun, non-routine, and challenging, involving the application of advanced skills in the area of specialization.
Key responsibilities :
- Design, develop, troubleshoot, and debug software programs for Hudson monitoring, cloud software installation, and infrastructure and cloud application usage monitoring.
Required Knowledge :
- Source/configuration management, Docker, Puppet, Ansible, application and server monitoring, AWS, database administration, Kubernetes, log monitoring, and CI/CD; designing and implementing build, deployment, and configuration management; improving infrastructure development; scripting languages.
- Good written and verbal communication skills
Qualification :
Education and Experience: Bachelor's/Master's in Computer Science
Open to 24/7 shifts
