
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security tooling.
- Set up, configure, and maintain Git repositories, Jenkins, UCD, and related tools for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Build Docker images and maintain Kubernetes clusters.
- Develop and maintain automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters, patching when necessary.
- Work with cloud security tools to keep applications secure.
- Participate in the software development lifecycle, specifically the infrastructure design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
Required Technical and Professional Expertise:
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; well versed in DevOps frameworks and Agile.
- Working knowledge of scripting with shell or Python, and of Terraform, Ansible, Puppet, or Chef.
- Experience with, and a good understanding of, at least one cloud platform (AWS, Azure, or Google Cloud).
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Experience with Jira is a plus.
We look forward to connecting with you. We will wait around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager, and we will aim to keep the overall time to close this requirement reasonable. Candidates will be kept informed and updated on feedback and application status.

JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis.
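Alerting of the kind described in the last criterion is typically codified as declarative rules. As an illustration only (the metric name and threshold below are assumptions, not taken from this posting), a minimal Prometheus alerting rule might look like:

```yaml
# prometheus-rules.yml — illustrative sketch; metric names and thresholds are assumptions
groups:
  - name: service-health
    rules:
      - alert: HighErrorRate
        # Fires when the 5-minute 5xx ratio stays above 5% for 10 minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "High 5xx error rate"
```

The `for:` clause is what separates a transient blip from a pageable incident; tuning it is part of the root cause analysis loop this role describes.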
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
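The YAML-based pipelines referenced above generally follow Azure Pipelines' stage/job/step schema. A minimal sketch (the script names and the `staging` environment are illustrative assumptions, not part of this role's actual setup):

```yaml
# azure-pipelines.yml — minimal build/test/deploy sketch; names are illustrative
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: ./build.sh
            displayName: Build
          - script: ./run-tests.sh
            displayName: Test
  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: DeployToStaging
        environment: staging   # assumed environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh staging
                  displayName: Deploy
```

Using a `deployment` job rather than a plain job ties the run to an Azure DevOps environment, which gives approval gates and deployment history for free.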
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams, including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
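As one concrete instance of the network policies listed above, a default-deny ingress policy for a namespace might look like the following sketch (the namespace name is an illustrative assumption):

```yaml
# Default-deny ingress for every pod in the namespace; namespace name is illustrative
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: ml-workloads
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules are listed, so all inbound traffic is denied
```

Teams then layer narrow allow-policies on top, which is the usual way to enforce least-privilege east-west traffic in EKS.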
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, RRPs (Runbooks), and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
We are looking for two Senior DevOps Engineers to join our Mumbai-based infrastructure team for a critical on-premises deployment project. This role is focused on transforming manual, legacy deployment practices into structured, secure, and compliant processes within a Windows-first, latency-sensitive environment.
The successful candidate will drive the creation of SOPs, deployment pipelines (without containerization), and a staging environment to support a hybrid stack of ASP.NET MVC (.NET), MS SQL Server (replication mode), and Java microservices with MySQL. This position requires on-site presence in Mumbai due to regulatory and infrastructure constraints and will play a key role in ensuring compliance with SEBI, RBI, PFMI, and IOSCO standards.
The key responsibility is to lead deployment modernization efforts in a secure, on-premises environment in Mumbai. The role involves working with legacy Windows infrastructure, ASP.NET MVC apps, MS SQL replication, and manual deployment processes. No containerization or CI/CD tools are in place, so we’re looking for someone who can establish automation and structure from the ground up.
Mandatory: On-site availability in Mumbai, strong experience with manual Windows-based deployments, regulatory compliance awareness (SEBI/RBI/PFMI).
Duration: 3-6 months | Immediate start
- At least 5 years of experience in cloud technologies (AWS and Azure) and development.
- Experience implementing DevOps practices and tools in areas such as CI/CD with Jenkins, environment and release automation, virtualization, infrastructure as code, and metrics tracking.
- Hands-on experience configuring DevOps tools in different environments.
- Strong knowledge of DevOps design patterns, processes, and best practices.
- Hands-on experience setting up build pipelines.
- Prior working experience in system administration or architecture on Windows or Linux.
- Must have experience with Git (Bitbucket, GitHub, GitLab).
- Hands-on experience with Jenkins pipeline scripting.
- Hands-on knowledge of at least one scripting language (NAnt, Perl, Python, shell, or PowerShell).
- Configuration-level skills in tools like SonarQube (or similar) and Artifactory.
- Expertise in virtual infrastructure (VMware, VirtualBox, QEMU, KVM, or Vagrant) and environment automation/provisioning using SaltStack, Ansible, Puppet, or Chef.
- Deploying, automating, maintaining, and managing Azure cloud-based production systems, including capacity monitoring.
- Good to have: experience migrating code repositories from one source control system to another.
- Hands-on experience with Docker containers and orchestration-based deployments such as Kubernetes, Service Fabric, and Docker Swarm.
- Must have good communication and problem-solving skills.
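Jenkins pipeline scripting, as required above, is usually expressed as a declarative Jenkinsfile. A minimal sketch (stage names, shell steps, and the SonarQube server name are illustrative assumptions):

```groovy
// Jenkinsfile — minimal declarative pipeline sketch; stage names and steps are illustrative
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh './build.sh' }
        }
        stage('Quality Gate') {
            // Assumes a SonarQube server named 'sonar' is configured in Jenkins
            steps { withSonarQubeEnv('sonar') { sh 'sonar-scanner' } }
        }
        stage('Publish') {
            steps { sh './publish-to-artifactory.sh' }
        }
    }
    post {
        failure { echo 'Build failed' }
    }
}
```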
YOUR ‘OKR’ SUMMARY
OKR stands for Objectives and Key Results.
As a Cloud Engineer, you will understand the overall movement of data across the platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own their deployment. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, develop acceptance tests for them, and review the work and the test results.
What you will do
- End-to-end complete RHOSP deployment (undercloud and overcloud), treated as an NFVi deployment
- Installing Red Hat’s OpenStack technology using the OSP-director
- Deploying a Red Hat OpenStack Platform based on Red Hat's reference architecture
- Deploying managed hosts with required OpenStack parameters
- Deploying three (3) node highly available (HA) controller hosts using Pacemaker and HAProxy
- Deploying all the supplied compute hosts that will host multiple VNFs (SR-IOV & DPDK)
- Implementing Ceph
- Integrating software-defined storage (Ceph) with RHOSP and Red Hat OpenStack Platform operational tools per industry standards and best practices
- Detailed network configuration and implementation using Neutron networking with the VXLAN network type and the Modular Layer 2 (ML2) Open vSwitch plugin
- Integrating Monitoring Solution with RHOSP
- Design and deployment of common Alarm and Performance Management solution
- Red Hat OpenStack management & monitoring
- VM Alarm and Performance management
- Configuring the cloud management platform with day-to-day operational tools to measure CPU/memory/network utilization, etc., at the VM level
- Baseline Security Standard & VA (Vulnerability Assessment) for RHOSP
Additional Advantage:
- Deep understanding of technology and passionate about what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to get the most performance out of a system being worked on.
- Prior development of a large software project using service-oriented architecture operating with real-time constraints.
What's In It for You
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get experience solving real-time problems and will eventually become a problem solver.
Benefits & Perks
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and surely
- Creative Freedom + Flat hierarchy
- Sponsorship to all those employees who represent company in events and meet ups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process
Candidates for this position can expect the following hiring process (subject to successfully clearing each round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for first technical interview
- Next, candidates will be invited for final technical interview
- Finally, candidates will be invited for Culture Plus interview with HR
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
As always, the interviews and screening call will be conducted via a mix of telephonic and video call.
So, if you are looking at an opportunity to really make a difference, make it with us.
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.

Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
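Automation with tools like Ansible, as mentioned in the skills above, is typically expressed as a playbook. A minimal sketch (the host group and package names are illustrative assumptions):

```yaml
# playbook.yml — illustrative Ansible sketch; host group and package are assumptions
- name: Baseline web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because both modules are idempotent, rerunning the playbook converges the hosts to the same state rather than repeating work, which is the core idea behind configuration management.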
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting-edge tools/technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
As a DevOps Engineer Consultant you will be responsible for Continuous Integration, Continuous Development, and Continuous Delivery, with a strong understanding of a business-driven software integration and delivery approach. You will report to the Technical Lead.
Responsibilities & Duties
• Ideate and create CI and CD processes, and documentation for the same.
• Ideate and create a code-maintenance process using VisualSVN/Jenkins.
• Design and implement new learning tools or knowledge-sharing practices.
Job requirements:
• Should be able to research, design Code Maintenance Process from scratch.
• Should be able to research, design Continuous Integration Process from scratch.
• Should be able to research, design Continuous Development Process from scratch.
• Should be able to research, design Continuous Delivery Process from scratch.
• Should have worked on InstallShield for creating installers.
• In-depth understanding of principles and best practices of Software Configuration Management (SCM) in Agile, SCRUM, and Waterfall methodologies.
• Experienced in Windows and Linux environments. Good knowledge and understanding of database and application server administration in a global production environment.
• Should have a good understanding and knowledge of Windows and Linux server deployment.
• Should have a good understanding and knowledge of application hosting on Windows IIS.
• Experienced in VisualSVN, GitLab CI, and Jenkins for CI and for end-to-end automation of all builds and CD, mostly with products developed using .NET technology.
• Experienced in working with version control systems like Git and source code management client tools like Git Bash, GitHub, and GitLab.
• Experience using Maven/Ant/Bamboo as build tools for building deployable artifacts.
• Knowledge of protocols such as FTP, SFTP, SSH, HTTP, HTTPS, and Connect:Direct.
• Experienced in deploying database changes to Oracle, DB2, MS SQL, and MySQL databases.
• Work experience supporting multiple platforms such as Windows, UNIX, Linux, and Ubuntu.
• Managed multiple production and non-production environments where primary objectives included automation, build-out, integration, and cost control.
• Expertise in troubleshooting problems encountered while building, deploying, and supporting production.
• Good understanding of creating and managing the various development and build platforms and deployment strategies.
• Excellent knowledge of Application Lifecycle Management, Change & Release Management, and ITIL processes.
• Exposure to all aspects of the software development life cycle (SDLC), such as analysis, planning, development, testing, implementation, and post-production analysis of projects.
• Good interaction with developers, managers, and team members to coordinate job tasks, and a strong commitment to work.
• Documented daily meetings, build reports, release notes, and many other day-to-day documents and status reports.
• Excellent communication, interpersonal, intuitive, analytical, and leadership skills; works efficiently in both independent and team environments.
• Enjoys working on all types of planned and unplanned issues/tasks.
• Implementing GitLab CI, GitLab, Docker, Maven, etc.
• Should have knowledge of Docker containers, which can be utilised in the deployment process.
• Good interpersonal skills and a team-working attitude; takes initiative and is very proactive in solving problems and providing the best solutions.
• Integrating various version control tools, build tools, and deployment methodologies (scripting) into Jenkins (or any other tool) to create end-to-end orchestrated build cycles.
• Troubleshoot build and performance issues and generate metrics on the master's performance along with job usage.
• Design and develop build and packaging tools for continuous integration builds and reporting. Automate the build and release cycles.
• Coordinate all build and release activities; ensure release processes are well documented, including source control repository branching and tagging.
• Maintain the product release process, including generating and delivering release packages, generating various metrics for tracking issues against releases, and providing the means of tracking compatibility among products.
• Maintained and managed cloud and test environments and automation for QA, Product Management, and Product Support.
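End-to-end CI orchestration of the kind described above, when built on GitLab CI, lives in a .gitlab-ci.yml file. A minimal sketch (job names and scripts are illustrative assumptions):

```yaml
# .gitlab-ci.yml — minimal build/test/deploy sketch; job names and scripts are illustrative
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - ./build.sh
  artifacts:
    paths:
      - dist/          # pass build output to later stages

test-job:
  stage: test
  script:
    - ./run-tests.sh

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - main             # deploy only from the main branch
```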
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East. This is an excellent opportunity to join ITP’s global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Job Description:

We are looking for a System Engineer who can manage requirements and data management in Rational Doors and Siemens Polarion. You will be part of a global development team with resources in China, Sweden and the US.
Responsibilities and tasks
- Import of requirement specifications to DOORS module
- Create module structure according to a written specification (e-mail, Word, etc.)
- Formats: ReqIF, Word, Excel, PDF, CSV
- Make adjustments to the data as required to be able to import it into the tool
- Review that the result is readable and possible to work with
- Import of information to new or existing modules in DOORS
- Feed back compliance status from an Excel compliance matrix to a module in DOORS
- Import requirements from one module to another based on baseline/filter…
- Import lists of items (test cases, documents, etc.) in Excel or CSV to a module
- Provide guidance on format to information holder at client
- Link information/attribute data from one module to others
- Status, test results, comment
- Link requirements according to information from the client in any given format
- Export data and reports
- Assemble report based on data from one or several modules according to filters/baseline/written requests in any given format
- Export statistics from data in DOORS modules
- Create filters in DOORS modules
Note: Polarion activities same as DOORS activities, but process, results and structure may vary
Requirements – must-have list (short, genuine musts, in no particular order)
- 10+ years of overall experience in the automotive industry
- Requirements management experience in the automotive industry
- 3+ years of experience with Rational DOORS as a user
- Knowledge of Siemens Polarion; working knowledge is a plus
- More than 7 years of experience in offshore delivery
- Able to lead a team of 3 to 5 people and manage temporary additions to the team
- Working knowledge of ASPICE and of handling requirements according to ASPICE L2
- Experience in setting up offshore delivery that best fits the customer's expectations
- Experience in setting up quality processes and ways of working
- Experience in metrics management – proposing, capturing, and sharing metrics with internal/external stakeholders
- Good communication skills in English
Requirements – good-to-have list, strictly sorted in descending priority order
- Experience in DevOps framework of delivery
- Interest in learning new languages
- Handling requirements according to ASPICE L3
- Willingness to travel; travel to Sweden may be needed (approx. 1-2 trips per year)
Soft skills
- The candidate must be a driven and proactive person, able to work with minimum supervision, and will be asked to give example situations in upcoming interviews.
- Good team player with attention to detail, self-disciplined, able to manage their own time and workload, proactive and motivated.
- Strong sense of responsibility and commitment, innovative thinking.
- Strong Understanding of Linux administration
- Good understanding of using Python or Shell scripting (Automation mindset is key in this role)
- Hands on experience with Implementation of CI/CD Processes
- Experience working with one of these cloud platforms (AWS, Azure, or Google Cloud)
- Experience working with configuration management tools such as Ansible and Chef
- Experience in source control management, including SVN, Bitbucket, and GitHub
- Experience with the setup and management of monitoring tools like Nagios, Sensu, and Prometheus
- Troubleshoot and triage development and production issues
- Understanding of microservices is a plus
Roles & Responsibilities
- Implementation and troubleshooting of Linux technologies related to the OS, virtualization, server and storage, backup, scripting/automation, and performance fine-tuning
- LAMP stack skills
- Monitoring tools deployment/management (Nagios, New Relic, Zabbix, etc.)
- Infra provisioning using an infrastructure-as-code mindset
- CI/CD automation






