We are looking for an experienced software engineer with a strong background in DevOps and in handling traffic & infrastructure at scale.
Responsibilities:
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery (CI/CD).
Build and operate infrastructure to support the website, backend clusters, and ML projects across the organization.
Monitor and track performance and reliability of our services and software to meet promised SLAs.
Requirements:
2+ years of experience working on distributed systems and shipping high-quality product features on schedule
In-depth knowledge of the whole web stack (front end, APIs, databases, networking, etc.)
Ability to build highly scalable, robust, and fault-tolerant services and stay up-to-date with the latest architectural trends
Experience with container-based deployment, microservices, in-memory caches, relational databases, and key-value stores
Hands-on experience with cloud infrastructure provisioning, deployment, and monitoring; we are on AWS and use ECS, RDS, ELB, EC2, ElastiCache, Elasticsearch, S3, and CloudWatch (see the sketch below)
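For a flavour of the monitoring work involved, here is a minimal sketch (illustrative only, not from the posting) of publishing and reading a custom CloudWatch metric with boto3; the namespace, metric name, and region are hypothetical.

```python
# Illustrative sketch: SLA-style latency tracking in CloudWatch with boto3.
# "Example/Backend" and "SiteLatency" are hypothetical names.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one latency datapoint for a hypothetical backend service.
cloudwatch.put_metric_data(
    Namespace="Example/Backend",
    MetricData=[{
        "MetricName": "SiteLatency",
        "Value": 123.0,
        "Unit": "Milliseconds",
    }],
)

# Read the last hour of datapoints back, averaged over 5-minute periods.
stats = cloudwatch.get_metric_statistics(
    Namespace="Example/Backend",
    MetricName="SiteLatency",
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```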


Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python (see the sketch after this list).
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
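As one illustration of the Ansible-plus-Python automation this role calls for, here is a minimal sketch (assumed, not part of the job description) that drives a playbook from Python with a dry run first; the playbook and inventory paths are hypothetical.

```python
# Minimal sketch: run an Ansible playbook from Python, dry run before apply.
import subprocess
import sys

PLAYBOOK = "deploy.yml"      # hypothetical playbook
INVENTORY = "inventory.ini"  # hypothetical inventory

def run_playbook(check_mode: bool) -> int:
    cmd = ["ansible-playbook", "-i", INVENTORY, PLAYBOOK]
    if check_mode:
        cmd.append("--check")  # dry run: report changes without applying them
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Refuse to deploy if the dry run fails.
    if run_playbook(check_mode=True) != 0:
        sys.exit("dry run failed; aborting deployment")
    sys.exit(run_playbook(check_mode=False))
```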
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.
- Development or technical support experience, preferably in DevOps.
- Looking for an engineer to join GitHub Actions support; experience with CI/CD tools such as Bamboo, Harness, Ansible, and Salt scripting.
- Hands-on expertise with GitHub Actions and CI/CD tools such as Bamboo and Harness, CI/CD pipeline stages, build tools, SonarQube, Artifactory, NuGet, ProGet, Veracode, LaunchDarkly, GitHub/Bitbucket repositories, and monitoring tools (see the sketch after the tools list below).
- Handling xMatters, techlines, and incidents.
- Strong scripting skills (PowerShell, Python, Bash/shell) for implementing automation scripts and tools to streamline administrative tasks and improve efficiency.
- Managing and maintaining Atlassian products such as Jira, Confluence, Bitbucket, and Bamboo as an Atlassian tools administrator.
- Expertise in Bitbucket and GitHub for version control and collaboration at a global level.
- Good experience with Linux/Windows system administration and databases.
- Awareness of SLA and error concepts and their implementation; provide support and participate in incident management and Jira stories. Continuously monitor system performance and availability, and respond to incidents promptly to minimize downtime.
- Well-versed in observability tools such as Splunk for monitoring, alerting, and logging to identify and address potential issues, especially in infrastructure.
- Expert at troubleshooting production issues and bugs: identifying and resolving issues in production environments.
- Experience in providing 24x5 support.
- GitHub Actions
- Atlassian Tools (Bamboo, Bitbucket, Jira, Confluence)
- Build Tools (Maven, Gradle, MS Build, NodeJS)
- SonarQube, Veracode
- Nexus, JFrog, NuGet, ProGet
- Harness
- Salt Services, Ansible
- PowerShell, Shell scripting
- Splunk
- Linux, Windows
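As an illustration of the GitHub Actions support work described above, here is a small sketch (assumed, not from the posting) that flags failed workflow runs via the GitHub REST API; the owner/repo names and token variable are placeholders.

```python
# Illustrative sketch: list recent GitHub Actions runs and flag failures.
# OWNER/REPO and the GITHUB_TOKEN environment variable are placeholders.
import os
import requests

OWNER, REPO = "example-org", "example-repo"  # hypothetical repository
url = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs"
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

resp = requests.get(url, headers=headers, params={"per_page": 10}, timeout=30)
resp.raise_for_status()

# Flag failed runs so they can be triaged as incidents.
for run in resp.json()["workflow_runs"]:
    if run["conclusion"] == "failure":
        print(f"FAILED: {run['name']} ({run['html_url']})")
```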
Role : Principal Devops Engineer
About the Client
The client is a product-based company building a platform that uses AI and ML technology for transportation and logistics. It also has a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation and Continuous Integration/Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and of monitoring tools like Zabbix, CloudWatch, and Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment, and decision-making skills
Position: Senior DevOps Engineer (Azure Cloud Infra & Application Deployments)
Location: Hyderabad
Hiring a Senior DevOps Engineer with 2 to 5 years of experience.
Primary Responsibilities
- Strong programming experience in PowerShell and batch scripts.
- Strong expertise in Azure DevOps, GitLab, CI/CD, Jenkins, GitHub Actions, and Azure infrastructure.
- Strong experience in configuring infrastructure and deploying applications with Kubernetes, Docker and Helm charts, App Services, serverless, SQL databases, cloud services, and container deployment.
- Continuous integration, deployment, and version control (Git/ADO).
- Strong experience in managing and configuring RBAC, managed identities, and security best practices for cloud environments.
- Strong verbal and written communication skills.
- Experience with the agile development process.
- Good analytical skills.
Additional Responsibilities
- Familiarity with various design and architecture patterns.
- Work with modern frameworks and design patterns.
- Experience with cloud applications on Azure/AWS. Should have experience in developing solutions and plugins, and should have used XRM Toolbox/Toolkit.
- Experience with Customer Portal, FetchXML, Power Apps, and Power Automate is good to have.
Pixuate is a deep-tech AI start-up enabling businesses to make smarter decisions with our edge-based video analytics platform, offering innovative solutions across traffic management, industrial digital transformation, and smart surveillance. We aim to serve enterprises globally as a preferred partner for the digitization of visual information.
Job Description
We at Pixuate are looking for highly motivated and talented Senior DevOps Engineers to support building the next generation of innovative, deep-tech AI based products. If you have a passion for building great software, an analytical mindset, and enjoy solving complex problems; if you thrive in a challenging environment, are self-driven, and constantly explore and learn new technologies; and if you have the ability to succeed on your own merits and fast-track your career growth, we would love to talk!
What do we expect from this role?
- This role’s key area of focus is to co-ordinate and manage the product from development through deployment, working with rest of the engineering team to ensure smooth functioning.
- Work closely with the Head of Engineering in building out the infrastructure required to deploy, monitor and scale the services and systems.
- Act as the technical expert, innovator, and strategic thought leader within the Cloud Native Development, DevOps and CI/CD pipeline technology engineering discipline.
- Should be able to understand how technology works and how various structures fall in place, with a high-level understanding of working with various operating systems and their implications.
- Troubleshoot basic software or DevOps stack issues.
You would be great at this job if you have the competencies mentioned below:
- B.Tech/M.Tech/MCA/BSc/MSc/BCA, preferably in Computer Science
- 5+ years of relevant work experience
- Knowledge of various DevOps tools and technologies
- Should have worked on tools like Docker, Kubernetes, Ansible in a production environment for data intensive systems.
- Experience in developing Continuous Integration/Continuous Delivery (CI/CD) pipelines, preferably using Jenkins, scripting (Shell/Python), and Git and Git workflows
- Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Experience with the design and implementation of big data backup/recovery solutions.
- Strong Linux fundamentals and scripting; experience as Linux Admin is good to have.
- Working knowledge in Python is a plus
- Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture is a plus
- Strong interpersonal and communication skills
- Proven ability to complete projects according to outlined scope and timeline
- Willingness to travel within India and internationally whenever required
- Demonstrated leadership qualities in past roles
More about Pixuate:
Pixuate, owned by Cocoslabs Innovative Solutions Pvt. Ltd., is a leading AI startup building the most advanced Edge-based video analytics products. We are recognized for our cutting-edge R&D in deep learning, computer vision and AI and we are solving some of the most challenging problems faced by enterprises. Pixuate’s plug-and-play platform revolutionizes monitoring, compliance to safety, and efficiency improvement for Industries, Banks & Enterprises by providing actionable real-time insights leveraging CCTV cameras.
We have enabled customers such as Hindustan Unilever, Godrej, Secuira, L&T, Bigbasket, Microlabs, and Karnataka Bank, and we are rapidly expanding our business to cater to the needs of the Manufacturing & Logistics and Oil and Gas sectors.
Rewards & Recognitions:
- Winner of Elevate by Startup Karnataka (https://pixuate.ai/thermal-analytics/).
- Winner of Manufacturing Innovation Challenge in the 2nd edition of Fusion 4.0’s MIC2020 organized by the NASSCOM Centre of Excellence in IoT & AI in 2021
- Winner of SASACT program organized by MEITY in 2021
Why join us?
You will get an opportunity to work with the founders, be part of the 0-to-1 journey, and get coached and guided. You will also get an opportunity to sharpen your skills by being innovative and contributing to areas of your personal interest. Our culture encourages innovation and freedom, and rewards high performers with faster growth and recognition.
Where to find us?
Website: http://pixuate.com/
Linked in: https://www.linkedin.com/company/pixuate-ai
Place of Work: Work from Office – Bengaluru
About Us
Zupee is India’s fastest-growing innovator in real-money gaming, with a focus on predominantly skill-focused games. Founded by two IIT Kanpur alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures, and Smile Group.
Know more about our recent funding: https://bit.ly/3AHmSL3
Our focus has been on innovating in the board, strategy, and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.
Location: We are location agnostic and our teams work from anywhere; physically, we are based out of Gurugram.
Core Responsibilities:
- Handle all DevOps activities
- Engage with development teams to document and implement best practices for our existing and new products
- Ensure reliability of services by making systems and applications stable
- Implement DevOps technologies and processes, e.g. containerization, CI/CD, infrastructure as code, metrics, monitoring, etc.
- Good understanding of Terraform (or similar ‘Infrastructure as Code’ technologies)
- Good understanding of modern CI/CD methods and approaches
- Troubleshoot and resolve issues related to application development, deployment, and operations
- Expertise in the AWS ecosystem (EC2, ECS, S3, VPC, ALB, SG)
- Experience in monitoring production systems and being proactive to ensure that outages are limited
- Develop and contribute to several infrastructure improvement projects around performance, security, autoscaling, and cost-optimization
What are we looking for:
- AWS cloud services – Compute, Networking, Security and Database services
- Strong with Linux fundamentals
- Experience with Docker & Kubernetes in a Production Environment
- Working knowledge of Terraform/Ansible
- Working knowledge of Prometheus and Grafana or any other monitoring system (see the sketch after this list)
- Good at scripting with Bash/Python/Shell
- Experience with CI/CD pipeline preferably AWS DevOps
- Strong troubleshooting and problem-solving skills
- A top-notch DevOps Engineer will demonstrate excellent leadership skills and the capacity to mentor subordinates
- Good communication and collaboration skills
- Openness to learning new tools and technologies
- Startup Experience would be a plus
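As an illustration of the Prometheus/Grafana monitoring experience listed above, here is a minimal sketch (not part of the posting) of a Python service exposing custom metrics with the prometheus_client library; the metric names and port are hypothetical.

```python
# Minimal sketch: expose custom app metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

@LATENCY.time()  # record how long each call takes
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```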
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with various platforms (e.g. GitLab, CircleCI, and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials, and key management.
• Proven experience in various coding languages (Java, Python) to support DevOps operations and cloud transformation.
• Familiarity and knowledge of the web standards (e.g. REST APIs, web security mechanisms)
• Hands on experience with GCP
• Experience in performance tuning, service outage management, and troubleshooting.
Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills; ability to operate independently and make decisions with little direct supervision
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
Skills
- Should have a couple of years of strong experience leading a DevOps team: planning and defining the DevOps roadmap and executing it along with the team
- Familiarity with AWS cloud and JSON templates, Python, AWS Cloud formation templates
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, and REST APIs
- Design and implement system architecture with the AWS cloud; develop automation scripts using ARM templates, Ansible, Chef, Python, and PowerShell
- Knowledge of AWS services and cloud design patterns, and of cloud fundamentals like autoscaling and serverless
- Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, plus CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, and WireMock or another mocking solution
- Expert knowledge of Windows/Linux/macOS with at least 5-6 years of system administration experience
- Should have strong skills in using Jira
- Should have knowledge of managing CI/CD pipelines for public cloud deployments on AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation (see the sketch after this list)
- Experience in monitoring tools like Pingdom, Nagios, etc.
- Experience in reverse proxy services like Nginx and Apache
- Desirable: experience in Bitbucket with version control tools like Git/SVN
- Experience with manual/automated testing of application deployments is desired
- Experience in database technologies such as PostgreSQL, MySQL
- Knowledge of Helm and Terraform
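As an illustration of the CloudFormation work listed above, here is a minimal sketch (assumed, not from the posting) that creates a stack with boto3 and waits for completion; the stack name, template path, and region are hypothetical.

```python
# Illustrative sketch: create a CloudFormation stack and wait for it.
import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")

with open("network.yaml") as f:  # hypothetical template file
    template_body = f.read()

cf.create_stack(
    StackName="example-network",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed only if the template creates IAM resources
)

# Block until the stack is fully created (raises on failure/rollback).
cf.get_waiter("stack_create_complete").wait(StackName="example-network")
print("stack created")
```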

- Work with developers to build out CI/CD pipelines, enable self-service build tools and reusable deployment jobs. Find, explore, and advocate for new technologies for enterprise use.
- Automate the provisioning of environments
- Promote new DevOps tools to simplify the build process and entire Continuous Delivery.
- Manage a Continuous Integration and Deployment environment.
- Coordinate and scale the evolving build and cloud deployment systems across all product development teams.
- Work independently, with, and across teams. Establishing smooth-running environments is paramount to your success and happiness.
- Encourage innovation, implementation of cutting-edge technologies, inclusion, outside-of-the-box thinking, teamwork, self-organization, and diversity.
Technical Skills
- Experience with AWS multi-region/multi-AZ deployed systems, auto-scaling of EC2 instances, CloudFormation, ELBs, VPCs, CloudWatch, SNS, SQS, S3, Route53, RDS, IAM roles, and security groups
- Experience in Data Visualization and Monitoring tools such as Grafana and Kibana
- Experienced in Build and CI/CD/CT technologies like GitHub, Chef, Artifactory, Hudson/Jenkins
- Experience with log collection, filter creation, and analysis, builds, and performance monitoring/tuning of infrastructure.
- Automate the provisioning of environments, pulling strings with Puppet, cooking up recipes with Chef, or working through Ansible, and the deployment of those environments using containers like Docker or rkt (have at least one configuration management tool under version control); a sketch follows below.
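A minimal sketch of that container-based deployment flow (illustrative only, not from the posting), using the Docker SDK for Python; the image, port mapping, and container name are hypothetical.

```python
# Minimal sketch: provision a throwaway container environment with docker-py.
import docker

client = docker.from_env()

# Run an nginx container detached, mapping container port 80 to host port 8080.
container = client.containers.run(
    "nginx:alpine",
    detach=True,
    ports={"80/tcp": 8080},
    name="example-env",  # hypothetical container name
)

print(container.status)
for line in container.logs(stream=False).splitlines():
    print(line.decode())

# Tear the environment down when done.
container.stop()
container.remove()
```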
Qualifications:
- B.E/ B.Tech/ M.C.A in Computer Science, Electronics and Communication Engineering, Electronics and Electrical Engineering.
- Minimum 60% in Graduation and Post-Graduation.
- Good verbal and written communication skills
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations among them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time and at a fraction of the current cost.
A few recognitions:
- Recognized as one of the Top 25 startups in India to work with in 2019 by LinkedIn
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment to minimize downtime and turnaround time
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing the IAAS and PAAS components on popular public Cloud Service Providers like AWS, Azure, GCP etc.
- Proficiency in Unix Operating systems and comfortable with Networking concepts
- Experience with developing/deploying a scalable system
- Experience with distributed databases & message queues (like Cassandra, Elasticsearch, MongoDB, Kafka, etc.); see the sketch after this list
- Experience in managing Hadoop clusters
- Understanding of containers, having managed them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
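As an illustration of the streaming-pipeline work described above, here is a minimal sketch (assumed, not from the posting) of a kafka-python consumer; the topic, broker address, and group id are hypothetical.

```python
# Illustrative sketch: a simple streaming consumer with kafka-python.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "entity-updates",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    group_id="due-diligence-pipeline",     # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Downstream processing would go here; we just log the key fields.
    print(message.topic, message.partition, message.offset, record)
```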
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology

