
Position Summary
DevOps is a department of Horizontal Digital, within which we have three practices:
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for a Cloud Engineering role with experience in infrastructure migrations. It is a fully hands-on position focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead. Alongside migrations, you will also work on projects building out Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of our infrastructure work is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-premises or on IaaS, PaaS, and containers.
Most of our DevOps work therefore involves planning, architecting, and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of the technical, commercial, and service elements of cloud migrations and infrastructure deployments.
- The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
- Candidates must expect to work across multiple projects in parallel and must have a fully flexible approach to working hours.
- Candidates should keep up to date with the rapid technological advancements taking place in the industry.
- Candidates should also have working knowledge of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- Multiple Cloud Background (Azure/AWS/GCP)
- Implementation knowledge of VMs and VNets.
- Know-how of cloud readiness and assessment.
- Good understanding of the 6 R's of migration (rehost, replatform, repurchase, refactor, retire, retain).
- Detailed understanding of cloud offerings.
- Ability to independently assess and perform discovery for any cloud migration.
- Working experience with containers and Kubernetes.
- Good knowledge of Azure Site Recovery, Azure Migrate, and CloudEndure.
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN, ExpressRoute, peering, Network Security Groups, route tables, NAT Gateway, etc.
- Experience with CI/CD tools such as Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
- High-availability and disaster-recovery implementations, taking RTO and RPO requirements into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.
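The "6 R's" requirement above refers to the standard migration triage: rehost, replatform, repurchase, refactor, retire, retain. A purely illustrative sketch of how such a portfolio triage might be scripted (all field names and rules here are hypothetical, not an actual assessment tool):

```python
# Hypothetical sketch: triaging an application portfolio into the 6 R's
# of cloud migration (rehost, replatform, repurchase, refactor, retire,
# retain). The rule set below is illustrative, not a real assessment tool.

def classify_workload(app: dict) -> str:
    """Return a suggested migration strategy for one application."""
    if app.get("end_of_life"):
        return "retire"                      # no longer needed
    if app.get("compliance_locked"):
        return "retain"                      # must stay on-prem for now
    if app.get("saas_alternative"):
        return "repurchase"                  # move to a SaaS product
    if app.get("needs_rearchitecture"):
        return "refactor"                    # rebuild cloud-native
    if app.get("minor_changes_ok"):
        return "replatform"                  # e.g. VM -> managed PaaS
    return "rehost"                          # lift-and-shift as-is

portfolio = [
    {"name": "legacy-crm", "saas_alternative": True},
    {"name": "sitecore-cm", "minor_changes_ok": True},
    {"name": "batch-etl", "needs_rearchitecture": True},
    {"name": "file-share", "end_of_life": True},
]

plan = {app["name"]: classify_workload(app) for app in portfolio}
print(plan)
```

In a real discovery exercise the inputs would come from tooling such as Azure Migrate assessments rather than hand-written flags.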

About Us:
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Are you the One?
As a Senior DevOps Engineer in our Platform Engineering team, you’ll play a pivotal role in evolving the foundational cloud infrastructure that powers our API-first BaaS platform. You will design, build, and operate secure, multi-tenant, highly available infrastructure systems optimized for rapid developer onboarding, API lifecycle management, observability, and compliance enforcement.
This role is deeply technical and cross-functional. You'll work closely with Backend Engineers, SREs, Product teams, and Infosec to deliver a seamless and secure platform experience.
Your responsibilities will include:
- Architect, provision, and operate infrastructure that supports tens of thousands of API transactions per second across multi-cloud environments.
- Build multi-tenant Kubernetes environments that serve as the backbone for API deployments, internal developer platforms, and CI/CD automation.
- Implement scalable service mesh, ingress, and routing layers to support API versioning and rate limiting.
- Enable self-service infrastructure provisioning, deployment pipelines, and observability tooling through developer portals and golden paths.
- Improve time-to-first-API and time-to-production for engineering teams by abstracting infrastructure complexity via platform APIs.
- Implement zero-trust networking, encryption at rest and in transit, secrets management, and fine-grained IAM policies for platform services.
- Collaborate with security and compliance teams to embed policy-as-code, audit logging, and monitoring to meet regulatory standards (e.g., PCI DSS, MAS-TRM).
- Ensure that infrastructure changes are governed through GitOps and change management workflows.
- Build and scale telemetry pipelines (metrics, traces, logs) for granular insights into API health, latency, and developer usage.
- Enable real-time alerting, SLO-based monitoring, and automated remediation for critical platform services.
- Lead incident response and postmortem analysis, continuously improving system resiliency and response times.
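The SLO-based monitoring responsibility above typically centers on error budgets. A minimal sketch, assuming a simple availability SLI over a request window (targets and figures are illustrative, not MatchMove's actual SLOs):

```python
# Illustrative sketch of SLO-based monitoring: given a target SLO and a
# window of request counts, compute the availability SLI and how much of
# the error budget has been burned. Numbers and names are hypothetical.

def error_budget_report(total: int, failed: int, slo: float) -> dict:
    """Compare measured availability against an SLO target (e.g. 0.999)."""
    sli = (total - failed) / total            # availability SLI
    budget = 1.0 - slo                        # allowed failure ratio
    burned = (failed / total) / budget        # fraction of budget consumed
    return {
        "sli": round(sli, 5),
        "budget_burned": round(burned, 3),
        "alert": burned >= 1.0,               # page when the budget is gone
    }

# 30-day window: 10M requests, 8,000 failures against a 99.9% SLO
report = error_budget_report(total=10_000_000, failed=8_000, slo=0.999)
print(report)
```

In production, the same calculation would usually be expressed as a recording rule in the telemetry stack rather than a standalone script.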
Requirements:
- 5+ years of hands-on experience designing and operating production-grade cloud infrastructure and Kubernetes workloads.
- Deep understanding of cloud-native architectures supporting large-scale, low-latency API services (e.g., OpenAPI, REST, gRPC).
- Strong command of Kubernetes internals, including operators, Helm, RBAC, admission controllers, CNI, CSI, and NetworkPolicies.
- Proficient in cloud security practices, identity federation (OIDC, JWT), API gateway controls, TLS management, and key rotation.
- Proficient in Python, Go, or a similar language for scripting, tooling, and automation.
- Familiar with cloud-native CI/CD tooling (ArgoCD, Tekton, FluxCD, Jenkins) and Infrastructure as Code (Terraform, Pulumi).
Good to have:
- Experience with API gateway platforms (Kong, Apigee, AWS API Gateway) in production setups.
- Background in multi-region deployments, failover design, and cross-zone service routing.
- Experience with developer portals, internal developer platforms (IDPs), and API usage analytics.
- Familiarity with banking-grade compliance standards such as SOC 2, PCI DSS, and ISO 27001.
- Understanding of data protection laws (e.g., PDPA, GDPR) as applied to cloud infrastructure.
MatchMove Culture:
- We cultivate a dynamic, innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
- We focus on employee development, supporting continuous learning and growth through training programs, on-the-job learning, and mentorship.
- We encourage speaking up, sharing ideas, and taking ownership. Our team spans Asia, embracing diversity and fostering a rich exchange of perspectives and experiences.
- Together, we harness the power of fintech and e-commerce to impact people's lives meaningfully.
- Grow with us and shape the future of fintech. Join us and be part of something bigger!
Job Responsibilities:
Section 1 -
- Responsible for managing and providing L1 support for the build, design, deployment, and maintenance of cloud solutions on AWS.
- Implement, deploy, and maintain development, staging, and production environments on AWS.
- Familiarity with serverless architecture and AWS services such as Lambda, Fargate, EBS, Glue, etc.
- Understanding of Infrastructure as Code and familiarity with related tools such as Terraform, Ansible, CloudFormation, etc.
Section 2 -
- Managing the Windows and Linux machines, Kubernetes, Git, etc.
- Responsible for L1 management of Servers, Networks, Containers, Storage, and Databases services on AWS.
Section 3 -
- Timely monitoring of production workload alerts and quick resolution of issues.
- Responsible for monitoring and maintaining the Backup and DR process.
Section 4 -
- Responsible for documenting the process.
- Responsible for leading cloud implementation projects with end-to-end execution.
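The backup and DR monitoring duty in Section 3 can be reduced to a recency check against each system's RPO. A hedged sketch (system names and RPO values are hypothetical):

```python
# Hypothetical sketch of a backup/DR monitoring check: verify that the
# most recent backup of each system is newer than its RPO target.
# System names and RPO values are illustrative only.

from datetime import datetime, timedelta

def stale_backups(last_backup: dict, rpo: dict, now: datetime) -> list:
    """Return systems whose latest backup is older than their RPO."""
    return [
        name for name, taken_at in last_backup.items()
        if now - taken_at > rpo[name]
    ]

now = datetime(2024, 1, 10, 12, 0)
last_backup = {
    "orders-db": datetime(2024, 1, 10, 11, 30),   # 30 min ago
    "reports-db": datetime(2024, 1, 9, 6, 0),     # 30 h ago
}
rpo = {
    "orders-db": timedelta(hours=1),
    "reports-db": timedelta(hours=24),
}
print(stale_backups(last_backup, rpo, now))       # ['reports-db']
```

A real check would pull backup timestamps from the AWS APIs (e.g. RDS snapshots) instead of hard-coded values.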
Qualifications: Bachelor of Engineering / MCA, preferably with an AWS or other cloud certification
Skills & Competencies
- Linux and Windows servers management and troubleshooting.
- AWS services experience with CloudFormation, EC2, RDS, VPC, EKS, ECS, Redshift, Glue, etc.
- Kubernetes and containers knowledge.
- Understanding of setting up AWS messaging, streaming, and queuing services (MSK, Kinesis, SQS, SNS, MQ).
- Understanding of serverless architecture.
- Strong understanding of networking concepts.
- Experience managing monitoring and alerting systems.
- Sound knowledge of database concepts such as data warehouses, data lakes, and ETL jobs.
- Good project management skills.
- Documentation skills.
- Understanding of backup and DR.
Soft Skills: project management, process documentation
Ideal Candidate:
- AWS certification, with 2-4 years of project execution experience.
- Someone who is interested in building sustainable cloud architecture with automation on AWS.
- Someone who is interested in learning and being challenged on a day-to-day basis.
- Someone who can take ownership of the tasks and is willing to take the necessary action to get it done.
- Someone who is curious to analyze and solve complex problems.
- Someone who is honest about the quality of their work and comfortable taking ownership of both their successes and failures.
Behavioral Traits
- We are looking for someone who is interested in being part of a creative, innovation-driven environment with other team members.
- We are looking for someone who understands the importance of teamwork and individual ownership at the same time.
- We are looking for someone who can debate logically, respectfully disagree, admit when proven wrong, and learn from their mistakes and grow quickly.
Position: SDE-1 DevSecOps
Location: Pune, India
Experience Required: 0+ Years
We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.
About FlytBase
FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe, backed by the largest network of partners in 50+ countries.
The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, KIIT and with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.
The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally - FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.
Role and Responsibilities:
- Participate in the creation and maintenance of CI/CD solutions and pipelines.
- Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
- Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
- Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
- Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
- Contribute to security assessments, including vulnerability and penetration testing against frameworks such as NIST, CIS AWS, and NIS2.
- Implement and oversee compliance processes for SOC II, ISO27001, and GDPR.
- Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
- Automate routine tasks and create tools to improve team efficiency and system robustness.
- Contribute to disaster recovery plans and ensure robust backup systems are in place.
- Develop and enforce security policies and respond effectively to security incidents.
- Manage incident response protocols, including on-call rotations and strategic planning.
- Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
- Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
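The SLI/SLO item above can be illustrated with a latency SLI, i.e. the share of requests served under a threshold. A minimal sketch with synthetic numbers (the threshold and SLO target are assumptions, not FlytBase's real targets):

```python
# Illustrative sketch of a latency SLI: the fraction of requests served
# under a threshold, checked against an SLO. All values are synthetic.

def latency_sli(latencies_ms: list, threshold_ms: float) -> float:
    """SLI = share of requests faster than the latency threshold."""
    fast = sum(1 for ms in latencies_ms if ms <= threshold_ms)
    return fast / len(latencies_ms)

samples = [120, 80, 450, 95, 210, 60, 700, 130, 90, 110]
sli = latency_sli(samples, threshold_ms=300)      # 8 of 10 under 300 ms
print(f"SLI={sli:.2f}, meets 0.75 SLO: {sli >= 0.75}")
```

In practice this would be computed continuously from telemetry (e.g. histogram metrics) rather than from an in-memory list.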
Best suited for candidates who: (Skills/Experience)
- Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
- Background in IT or computer science.
- Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
- Solid understanding of network layers and TCP/IP protocols.
- In-depth understanding of operating systems, networking, and cloud services.
- Strong problem-solving skills with a 'hacker' mindset.
- Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus.
- Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.
Compensation:
This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.
Perks:
- Fast-paced Startup culture
- Hacker mode environment
- Enthusiastic and approachable team
- Professional autonomy
- Company-wide sense of purpose
- Flexible work hours
- Informal dress code
The candidate must demonstrate a high level of ownership, integrity, and leadership skills, and be flexible and adaptive with a strong desire to learn and excel.
Required Skills:
- Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
- Excellent knowledge of AWS offerings around Cloud and DevOps
- Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
- Should be familiar with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
- Strong experience in Python, Shell Scripting, Ansible, and Terraform
- Good command of monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc.
- Experience with Linux/Unix systems administration.
Bito is a startup using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams located in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators, but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter and fast learner with a desire to work in a startup environment
- Experience working with public clouds like AWS
- Operating and monitoring cloud infrastructure on AWS
- Primary focus on building, implementing, and managing operational support
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure
- Expert in at least one scripting language: Python, shell, etc.
- Experience with Nginx/HAProxy, the ELK stack, Ansible, Terraform, the Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and service monitoring
- Proven experience with CI/CD pipelines and handling database upgrade issues
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
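The load-monitoring and capacity-planning requirement above often boils down to simple fleet-sizing arithmetic. A purely illustrative sketch (the throughput and headroom figures are assumptions, not real benchmarks):

```python
# Hypothetical capacity-planning sketch: estimate how many instances are
# needed to serve a peak request rate, given a measured per-instance
# throughput and a utilization headroom factor. Figures are illustrative.

import math

def instances_needed(peak_rps: float, per_instance_rps: float,
                     headroom: float = 0.7) -> int:
    """Size a fleet so each instance runs at <= `headroom` utilization."""
    effective = per_instance_rps * headroom   # usable capacity per node
    return math.ceil(peak_rps / effective)

# e.g. 12,000 req/s peak, each node benchmarked at 800 req/s
print(instances_needed(peak_rps=12_000, per_instance_rps=800))  # -> 22
```

The headroom factor leaves slack for failover and traffic spikes; real capacity models would also account for multi-AZ redundancy.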
Responsibilities:
- Writing and maintaining automation for deployments across various clouds (AWS/Azure/GCP)
- Bringing a passion for staying on top of DevOps trends, experimenting, and learning new CI/CD technologies
- Creating architecture diagrams and documentation for various components
- Building tools and automation to improve the system's observability, availability, reliability, performance/latency, monitoring, and emergency response
Requirements:
- 3 - 5 years of professional experience as a DevOps / System Engineer.
- Strong knowledge in Systems Administration & troubleshooting skills with Linux.
- Experience with CI/CD best practices and tooling, preferably Jenkins, Circle CI.
- Hands-on experience with Cloud platforms such as AWS/Azure/GCP or private cloud environments.
- Experience with and understanding of modern container orchestration; well-versed in containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes).
- Experience in Infrastructure as code development using Terraform.
- Basic networking knowledge (VLANs, subnets, VPCs) and web servers such as Nginx and Apache.
- Experience in handling different SQL and NoSQL databases (PostgreSQL, MySQL, Mongo).
- Experience with the Git version control system.
- Proficiency in any programming or scripting language such as Shell Script, Python, Golang.
- Strong interpersonal and communication skills; ability to work in a team environment.
- AWS / Kubernetes Certifications: AWS Certified Solutions Architect / CKA.
- Setup and management of a Kubernetes cluster, including writing Docker files.
- Experience working in and advocating for agile environments.
- Knowledge in Microservice architecture.
Requirements
You will make an ideal candidate if you have:
- Experience building a range of services with a cloud service provider
- Expert understanding of DevOps principles and Infrastructure as Code concepts and techniques
- Strong understanding of CI/CD tools (Jenkins, Ansible, GitHub)
- Managed an infrastructure involving 50+ hosts/networks
- 3+ years of Kubernetes experience and 5+ years of experience with native services such as Compute (virtual machines), Containers (AKS), Databases, DevOps, Identity, Storage, and Security
- Experience engineering solutions on a cloud foundation platform using Infrastructure as Code methods (e.g., Terraform)
- Security and compliance, e.g., IAM and cloud compliance/auditing/monitoring tools
- Customer/stakeholder focus; ability to build strong relationships with application teams, cross-functional IT, and global/local IT teams
- Good leadership and teamwork skills; works collaboratively in an agile environment
- Operational effectiveness: delivers solutions that align with approved design patterns and security standards
- Excellent skills in at least one of the following: Python, Ruby, Java, JavaScript, Go, Node.js
- Experienced in full automation and configuration management
- A track record of constantly looking for ways to do things better and an excellent understanding of the mechanisms necessary to successfully implement change
- Has set and achieved challenging short-, medium-, and long-term goals that exceeded the standards in their field
- Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences
- Built effective networks across business areas, developing relationships based on mutual trust and encouraging others to do the same
- A successful track record of delivering complex projects and/or programmes, utilizing appropriate techniques and tools to ensure and measure success
- A comprehensive understanding of risk management and proven experience ensuring own/others' compliance with relevant regulatory processes
Essential Skills:
- Demonstrable cloud service provider experience: infrastructure build and configuration of a variety of services including compute, DevOps, databases, storage, and security
- Demonstrable experience of Linux administration and scripting, preferably Red Hat
- Experience working with continuous integration (CI), continuous delivery (CD), and continuous testing tools
- Experience working within an Agile environment
- Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Node.js
- Server administration (either Linux or Windows)
- Automation scripting (using tools such as Terraform, Ansible, etc.)
- Ability to quickly acquire new skills and tools
Required Skills:
- Linux & Windows Server Certification
We are hiring candidates who are looking to work in a cloud environment and ready to learn and adapt to the evolving technologies.
Linux Administrator Roles & Responsibilities:
- 5+ years of professional experience, with strong working expertise in Agile environments.
- Deep knowledge of managing Linux servers.
- Managing Windows servers (not mandatory).
- Manage Web servers (Apache, Nginx).
- Manage Application servers.
- Strong background & experience in any one scripting language (Bash, Python)
- Manage firewall rules.
- Perform root cause analysis for production errors.
- Basic administration of MySQL, MSSQL.
- Ready to learn and adapt to business requirements.
- Manage information security controls with best practices and processes.
- Support business requirements beyond working hours.
- Ensuring the highest uptime of services.
- Monitoring resource usage.
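Root cause analysis for production errors usually starts with grouping failures in the logs. A minimal, self-contained sketch over synthetic log lines (the log format here is an assumption, not a specific server's):

```python
# Illustrative root-cause triage helper: count 5xx error signatures in a
# web server access log to find the most frequent failure. The log lines
# below are synthetic examples in a simplified access-log format.

from collections import Counter
import re

LOG = """\
10.0.0.5 - - [10/Jan/2024] "GET /api/v1/users" 200
10.0.0.7 - - [10/Jan/2024] "GET /api/v1/orders" 502
10.0.0.7 - - [10/Jan/2024] "GET /api/v1/orders" 502
10.0.0.9 - - [10/Jan/2024] "POST /api/v1/login" 500
"""

def top_errors(log: str) -> list:
    """Group 5xx responses by request path, most frequent first."""
    pattern = re.compile(r'"(?:GET|POST) (\S+)" (5\d\d)')
    hits = Counter(m.group(1) for m in pattern.finditer(log))
    return hits.most_common()

print(top_errors(LOG))   # [('/api/v1/orders', 2), ('/api/v1/login', 1)]
```

The same idea scales up with log aggregation tooling (ELK, CloudWatch Logs Insights) once volumes outgrow a single file.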
Skills/Requirements
- Bachelor’s Degree or Diploma in Computer Science, Engineering, Software Engineering or a relevant field.
- Experience with Linux-based infrastructures, Linux/Unix administration.
- Knowledge of managing databases such as MySQL and MS SQL.
- Knowledge of scripting languages such as Python, Bash.
- Knowledge of open-source technologies and cloud services like AWS and Azure is a plus. Candidates willing to learn will be preferred.
- Experience in managing web applications.
- Problem-solving attitude.
- 5+ years experience in the IT industry.
We are looking for a self-motivated and goal-oriented candidate to lead in architecting, developing, deploying, and maintaining first-class, highly scalable, highly available SaaS platforms.
This is a very hands-on role. You will have a significant impact on Wenable's success.
Technical Requirements:
8+ years of SaaS and cloud architecture and development experience with technologies such as:
- AWS, GoogleCloud, Azure, and/or other
- Kafka, RabbitMQ, Redis, MongoDB, Cassandra, ElasticSearch
- Docker, Kubernetes, Helm, Terraform, Mesos, VMs, and/or similar orchestration, scaling, and deployment frameworks
- ProtoBufs, JSON modeling
- CI/CD utilities like Jenkins, CircleCI, etc.
- Log aggregation systems like Graylog or ELK
- Additional development tools typically used in orchestration and automation, like Python, shell, etc.
- Strong security best-practices background
- Strong software development background a plus
Leadership Requirements:
- Strong written and verbal skills. This role will entail significant coordination both internally and externally.
- Ability to lead projects of blended teams, on/offshore, of various sizes.
- Ability to report to executive and leadership teams.
- Must be data-driven and objective/goal-oriented.

