Software Architect/CTO
at IT Company
Agency job
10 - 15 yrs
₹10L - ₹18L / yr
Remote, Pune
Skills
Terraform
Azure
Cloud
JD:

  • 10+ years of overall industry experience
  • 5+ years of cloud experience
  • 2+ years of architect experience
  • Varied background preferred, spanning both systems and development
      • Experience working with applications, not pure infrastructure experience
  • Azure experience – strong background using Azure for application migrations
  • Terraform experience – automation technologies should appear in prior job experience
  • Hands-on experience delivering in the cloud
  • Must have job experience designing solutions for customers
  • IaaS cloud architect: workload migrations to AWS and/or Azure
  • Experience with security architecture considerations
  • CI/CD experience
  • Proven track record of application migrations
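As a hedged illustration of the Terraform-on-Azure automation this role touches, a minimal configuration might look like the sketch below. Every name and value is an illustrative placeholder, not taken from the posting; real migrations involve far more configuration.

```hcl
# Illustrative Terraform sketch: a resource group and virtual network in Azure.
# All names and values are placeholders.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "app" {
  name     = "rg-app-migration"
  location = "East US"
}

resource "azurerm_virtual_network" "app" {
  name                = "vnet-app"
  address_space       = ["10.0.0.0/16"]
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
}
```

Versioning modules like this and running them through a pipeline is what "automation technologies in job experience" typically means in practice.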
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

Similar jobs

Technology Industry
Agency job
via Peak Hire Solutions by Dharati Thakkar
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹5L / yr
DevOps
Windows Azure
CI/CD
MySQL
Python
+12 more

JOB DETAILS:

* Job Title: DevOps Engineer (Azure)

* Industry: Technology

* Salary: Best in Industry

* Experience: 2-5 years

* Location: Bengaluru, Koramangala

Review Criteria

  • Strong Azure DevOps engineer profiles only.
  • Must have a minimum of 2+ years of hands-on experience as an Azure DevOps Engineer, with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
  • Must have strong experience designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
  • Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.
  • Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.
  • Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis.

 

Preferred

  • Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
  • Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
  • Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline

 

Role & Responsibilities

  • Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
  • Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
  • Implement Git branching strategies and automate release workflows.
  • Develop scripts using Bash, Python, or PowerShell for DevOps automation.
  • Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
  • Collaborate with dev and QA teams in an Agile/Scrum environment.
  • Maintain documentation, runbooks, and participate in root cause analysis.
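The pipeline responsibilities above can be sketched as a minimal Azure DevOps YAML definition. Stage names, scripts, and the `ubuntu-latest` pool are illustrative placeholders, not from the posting:

```yaml
# azure-pipelines.yml — minimal build/test/deploy sketch (illustrative only)
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: make build
            displayName: Build
          - script: make test
            displayName: Run tests
  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: DeployApp
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh
                  displayName: Deploy
```

A real pipeline would add artifact publishing, approvals on the `production` environment, and branch-policy gates, but the stage/job/step shape stays the same.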

 

Ideal Candidate

  • 2–5 years of experience as an Azure DevOps Engineer.
  • Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
  • Experience with Microsoft Azure (OCI/AWS exposure is a plus).
  • Working knowledge of SQL Server, PostgreSQL, or Oracle.
  • Good scripting, troubleshooting, and communication skills.
  • Bonus: Docker, Kubernetes, Terraform, Ansible experience.
  • Comfortable with WFO (Koramangala, Bangalore).


Glan Management Consultancy
Gurugram
7 - 15 yrs
₹20L - ₹40L / yr
DevOps
aws
Terraform
MongoDB

Job Title: AWS DevOps Engineer – Manager, Business Solutions

Location: Gurgaon, India

Experience Required: 8-12 years

Industry: IT


 

We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.

 

Key Deliverables (Essential functions & Responsibilities of the Job):

 

  • Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
  • Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
  • Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
  • Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
  • Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
  • Enforce cloud security best practices: IAM, VPC setups, encryption, certificate management, and compliance controls.
  • Work closely with development teams to improve application reliability, scalability, and performance.
  • Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
  • Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
  • Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.

 

Knowledge, Skills, and Abilities:

  • 7+ years of hands-on AWS DevOps experience, especially with middleware services.
  • Strong expertise in MongoDB Atlas or other cloud MongoDB services.
  • Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
  • Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
  • Excellent scripting skills in Python, Bash, or PowerShell.
  • Experience in containerization and orchestration: Docker, EKS, ECS.
  • Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
  • Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
  • Ability to solve complex problems and thrive in a fast-paced environment.

 

Preferred Qualifications

  • AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
  • MongoDB Certified DBA or Developer.
  • Experience with serverless services like AWS Lambda and Step Functions.
  • Exposure to multi-cloud or hybrid cloud environments.




Mail your updated resume with current salary to:

Email: etalenthire[at]gmail[dot]com

Satish; 88O 27 49 743

TechBiz Global
Posted by Veronika Novik
Remote only
3 - 5 yrs
€18K - €19.2K / yr
Amazon Web Services (AWS)
Windows Azure
Python
Bash
Terraform

At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking four DevOps Support Engineers to join one of our clients' teams in India, able to start by the 20th of July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

Job requirements

Key Responsibilities:

  • Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability.
  • Respond promptly to incidents and alerts, investigating and resolving issues efficiently.
  • Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python).
  • Communicate clearly and fluently in English with customers and internal teams.
  • Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows.
  • Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.

Shift Details:

  • Engineers rotate through morning, evening, and night shifts (including weekends), typically working 4–5 shifts per week to cover 24/7 support evenly among the team.
  • The rotation ensures no single engineer is always working nights or weekends; the load is shared fairly across the team.
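The rotation arithmetic described above can be illustrated with a small sketch. This is a hypothetical round-robin roster, not the client's actual scheduling tool; the shift names and team size are assumptions:

```python
from itertools import cycle

def build_roster(engineers, days=7, shifts=("morning", "evening", "night")):
    """Assign one engineer per shift per day, round-robin, so that nights
    and weekends rotate across the team instead of landing on one person."""
    order = cycle(engineers)
    return {
        day: {shift: next(order) for shift in shifts}
        for day in range(1, days + 1)
    }

roster = build_roster(["eng1", "eng2", "eng3", "eng4"])
# 7 days x 3 shifts = 21 slots over 4 engineers: each works 5 or 6 shifts,
# consistent with the "about 4 to 5 shifts per week" pattern above.
```

Because the cycle never resets, the engineer who covers Saturday night one week lands on a different slot the next, which is what makes the load even over time.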

Qualifications:

  • 2–5 years of experience in DevOps or cloud support roles.
  • Strong familiarity with AWS and/or Azure cloud environments.
  • Experience with CI/CD tools such as GitHub Actions or Jenkins.
  • Proficiency with monitoring tools like Datadog, CloudWatch, or similar.
  • Basic scripting skills in Bash, Python, or comparable languages.
  • Excellent communication skills in English.
  • Comfortable and willing to work in a shift-based support role, including night and weekend shifts.
  • Prior experience in a shift-based support environment is preferred.

What We Offer:

  • Remote work opportunity — work from anywhere in India with a stable internet connection.
  • Comprehensive training program including:
  • Shadowing existing processes to gain hands-on experience.
  • Learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.
Porter
Agency job
via UPhill HR by Ingit Pandey
Bengaluru (Bangalore)
4 - 6 yrs
₹24L - ₹34L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
CI/CD
+4 more

Job Title: DevOps SDE III

Job Summary

Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.

Responsibilities and Duties

In this role, you will

  • Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
  • Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
  • Build out and improve our observability infrastructure.
  • Set up automated auditing processes and improve our applications' security posture.
  • Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
  • Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
  • Continuously improve our build pipelines, automated deployments, and automated testing.
  • Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
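One of the duties above, canary updates, comes down to a promotion decision. The sketch below is a hypothetical check with made-up thresholds, not Porter's actual tooling:

```python
def should_promote_canary(baseline_error_rate, canary_error_rate,
                          canary_requests, min_requests=1000,
                          max_relative_increase=0.10):
    """Promote only if the canary has seen enough traffic and its error
    rate stays within an allowed relative margin of the baseline."""
    if canary_requests < min_requests:
        return False  # not enough data to judge the canary yet
    allowed = baseline_error_rate * (1 + max_relative_increase)
    return canary_error_rate <= allowed

# baseline 1% errors vs canary 1.05% over 5000 requests: within margin
promote = should_promote_canary(0.01, 0.0105, canary_requests=5000)
```

In real internal tooling this check would read error rates from the observability stack and gate the next rollout step, but the decision logic has this shape.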

Qualifications and Skills

Hard requirements for this role:

  • 5+ years of experience as a DevOps/Infrastructure engineer on AWS.
  • Experience with Git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS, and Kubernetes.)
  • Experience working with infrastructure as code (Terraform/CloudFormation).
  • Linux and networking administration experience.
  • Strong Linux shell scripting experience.
  • Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred.)
  • Experience with configuration management tools like Ansible and Packer.
  • Experience with container orchestration tools (Kubernetes/ECS).
  • Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres.)
  • AWS SysOps Administrator + Developer certification, or equivalent knowledge.

Good to have:

  • Experience working with the ELK stack.
  • Experience supporting JVM applications.
  • Experience working with APM tools. (We use Datadog.)
  • Experience working in an XaaC environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel.)
  • Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty.)
  • Experience with JIRA/Jira help desk.

 

Bootlabs Technologies Private Limited
Posted by Sahana Pal
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
Job Description:

About BootLabs

https://www.bootlabs.in/

- We are a boutique tech consulting partner, specializing in cloud-native solutions.
- We are obsessed with anything "cloud". Our goal is to seamlessly automate the development lifecycle and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation, and managed cloud services.
- We are eager to research, discover, automate, adapt, empower, and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:

Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.

  • AWS
      Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
      Data: RDS, DynamoDB, Elasticsearch
      Workload: EC2, EKS, Lambda, etc.
  • Azure
      Networking: VNET, VNET Peering
      Data: Azure MySQL, Azure MSSQL, etc.
      Workload: AKS, Virtual Machines, Azure Functions
  • GCP
      Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
      Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
      Workload: GKE, Instances, App Engine, Batch, etc.

  • Experience with any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
  • Kubernetes (EKS/AKS/GKE) or Ansible experience: basics like pods, deployments, networking, and service mesh; has used a package manager like Helm.
  • Scripting experience (Bash/Python): automation in pipelines when required, system services.
  • Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.

Optional:

  • Experience in any programming language is not required but is appreciated.
  • Good experience with Git, SVN, or any other code management tool is required.
  • DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure, and code.
  • Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
Adfolks
Posted by Jintu Antony
India
5 - 8 yrs
₹10L - ₹30L / yr
Docker
Kubernetes
DevOps
openshift
Windows Azure
+2 more
Job Role: Sr. Cloud Architect (OpenShift)
Must-haves: OpenShift, Kubernetes
Location: Currently in India (also willing to relocate to UAE)
An immediate joiner is preferred, with a notice period of 2 weeks to 1 month.
Add-on skills: Terraform, GitOps, Jenkins, ELK
B2B2C tech Web3 startup
Agency job
via Merito by Gaurav Bhosle
Bangalore
5 - 10 yrs
₹30L - ₹50L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Amazon EC2
Amazon S3
Amazon RDS
+2 more
Hi,

Our client is a social commerce Web3 startup founded by IIT-B graduates experienced in retail, e-commerce, and fintech.

We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).

Location: Bangalore

Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; someone with a knack for benchmarking and optimization.
● Hire, develop, and cultivate a reliable, high-performing cloud support team.
● Build and operate complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general.
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensure scaled database setup/monitoring with near-zero downtime.

Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java.
● 5+ years of Linux/Unix administration: monitoring, reliability, and security of Linux-based, online, high-traffic services and web/e-commerce properties.
● 5+ years of production experience with large-scale cloud-based infrastructure (GCP preferred).
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud: EC2, S3 buckets, RDS.
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools.
● Hands-on experience with configuration management (Chef/Ansible).
● Experience designing high-availability infrastructure and planning for disaster recovery solutions.

Regards
Team Merito
An IT based company in Pune
Agency job
via WEN Women Entrepreneur Network by Kanika Vaswani
Pune
2 - 6 yrs
₹6L - ₹11L / yr
DevOps
Docker
Terraform
Kubernetes
Amazon Web Services (AWS)
+5 more

We are looking for candidates who have development experience and have delivered CI/CD-based projects. Candidates should have good hands-on experience with the Jenkins master-slave architecture, should have used AWS native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, and should have experience setting up cross-platform CI/CD pipelines that span different cloud platforms or a mix of on-premise and cloud.

 

 

Job Description:

  • Hands-on with AWS (Amazon Web Services) cloud, its DevOps services, and CloudFormation.
  • Experience interacting with customers.
  • Excellent communication skills.
  • Hands-on in creating and managing Jenkins jobs; Groovy scripting.
  • Experience setting up cloud-agnostic and cloud-native CI/CD pipelines.
  • Experience with Maven.
  • Experience in scripting languages like Bash, PowerShell, and Python.
  • Experience with automation tools like Terraform, Ansible, Chef, and Puppet.
  • Excellent troubleshooting skills.
  • Experience with Docker and Kubernetes, including creating Dockerfiles.
  • Hands-on with version control systems like GitHub, GitLab, TFS, Bitbucket, etc.
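The Jenkins and Groovy requirements above can be illustrated with a minimal declarative Jenkinsfile sketch. The agent label, image name, and stage commands are assumptions for illustration, not taken from the posting:

```groovy
// Jenkinsfile — minimal declarative pipeline sketch (illustrative only)
pipeline {
    agent { label 'linux' }
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }   // Maven build, as the JD mentions Maven
        }
        stage('Docker Image') {
            steps { sh 'docker build -t myapp:${BUILD_NUMBER} .' }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
}
```

Scripted (Groovy) pipelines allow more logic than this declarative form, but most Jenkins job management starts from a structure like the above.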
RenovITe Technologies
Posted by Pranjali Sinha
Remote only
6 - 15 yrs
₹4L - ₹20L / yr
Kubernetes
Terraform
Amazon Web Services (AWS)
Docker
Jenkins
+1 more
  • Strong communication skills (written and verbal).
  • Responsive, reliable, and results-oriented, with the ability to execute on aggressive plans.
  • A background in software development, with experience working in an agile product software development environment.
  • An understanding of modern deployment tools (Git, Bitbucket, Jenkins, etc.), workflow tools (Jira, Confluence), and practices (Agile/Scrum, DevOps, etc.).
  • Expert-level experience with AWS tools, technologies, and associated APIs: IAM, CloudFormation, CloudWatch, AMIs, SNS, EC2, EBS, EFS, S3, RDS, VPC, ELB, Route 53, Security Groups, Lambda, etc.
  • Hands-on experience with Kubernetes (EKS preferred).
  • Strong DevOps skills across CI/CD and configuration management using Jenkins, Ansible, Terraform, and Docker.
  • Experience provisioning and spinning up AWS clusters using Terraform, Helm, and Helm charts.
  • Ability to work across multiple projects simultaneously.
  • Ability to manage and work with teams and customers across the globe.

 

YourHRfolks
Posted by Pranit Visiyait
Remote, Jaipur
3 - 8 yrs
₹6L - ₹16L / yr
DevOps
Docker
Jenkins
Kubernetes
Terraform
+6 more

Job Location: Jaipur

Experience Required: Minimum 3 years

About the role:

As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.

Responsibilities:

  • Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
  • Ensure availability, performance, security, and scalability of AWS production systems.
  • Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
  • Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
  • Write and maintain software that runs the infrastructure powering the Loyalty and Data platform for some of the world's largest brands.
  • Take part in a 24x7 on-call shift rotation for Level 2 and higher escalations.
  • Respond to incidents and write blameless RCAs/postmortems.
  • Implement and practice proper security controls and processes.
  • Provide recommendations for architecture and process improvements.
  • Define and deploy systems for metrics, logging, and monitoring on the platform.
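The "measure availability and latency" duty above reduces to simple arithmetic over request samples. A hypothetical sketch (the sample data and figures are illustrative, not Punchh's real SLOs):

```python
import statistics

def availability(success_count, total_count):
    """Fraction of requests served successfully, as a percentage."""
    return 100.0 * success_count / total_count

def p99_latency_ms(samples):
    """99th-percentile latency via linear interpolation over the samples."""
    return statistics.quantiles(samples, n=100)[-1]

# 50 failures out of 100,000 requests -> 99.95% availability
uptime = availability(99_950, 100_000)
```

Monitoring systems compute exactly these aggregates continuously and alert when they cross the SLA thresholds the first bullet refers to.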

Must have:  

  • Minimum 3 years of experience in DevOps.
  • BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
  • Strong interpersonal skills.
  • Must have experience with CI/CD tooling such as Jenkins, CircleCI, or TravisCI.
  • Must have experience with Docker and Kubernetes, Amazon ECS, or Mesos.
  • Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy.
  • Proficient in shell scripting and, most importantly, knowing when to stop scripting and start developing.
  • Experience creating highly automated infrastructures with configuration management tools like Terraform, CloudFormation, or Ansible.
  • In-depth knowledge of the Linux operating system and its administration.
  • Production experience with a major cloud provider such as Amazon AWS.
  • Knowledge of web server technologies such as Nginx or Apache.
  • Knowledge of Redis, Memcache, or another in-memory data store.
  • Experience with load balancing technologies such as Amazon ALB/ELB, HAProxy, or F5.
  • Comfortable with large-scale, highly available distributed systems.

Good to have:  

  • Understanding of web standards (REST, SOAP APIs, OWASP, HTTP, TLS).
  • Production experience with HashiCorp products such as Vault or Consul.
  • Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
  • Experience in a PCI environment.
  • Experience with Big Data distributions from Cloudera, MapR, or Hortonworks.
  • Experience maintaining and scaling database applications.
  • Knowledge of fundamental systems engineering principles such as the CAP theorem, concurrency control, etc.
  • Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
  • Understanding of infrastructure auditing and helping the organization control infrastructure costs.
  • Experience with Kafka, RabbitMQ, or another messaging bus.
Why apply to jobs via Cutshort

Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.

Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly.

Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.

Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.