You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle, across both our cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements, managing both development and production environments
- Optimize the pipeline to ensure ultra-fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Deep experience in: GitLab, GitOps, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially GPU processing. This need not include model training or data gathering; we are primarily looking for experience with model deployment and inference at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization, including creating and maintaining custom Docker images, deploying them to cloud and on-premise servers, and monitoring production containers with robust error handling
- Expertise in AWS infrastructure, networking, and availability
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g. OpenCV, Pillow, etc.)
- Experience with PostgreSQL and MongoDB (or general SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or Master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, SQLAlchemy, AWS, GCP, Grafana, Prometheus, Linux/Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years

Job Title: Senior DevOps Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Required Qualifications
● Experience:
○ 5+ years of hands-on experience as a DevOps Engineer or similar role, with proven expertise in building and customizing Helm charts from scratch (not just using pre-existing ones).
○ Demonstrated ability to design and whiteboard DevOps pipelines, including CI/CD workflows for microservices applications.
○ Experience packaging and deploying applications with stateful dependencies (e.g., databases, persistent storage) in varied environments: on-prem (air-gapped and non-air-gapped), single-tenant cloud, multi-tenant cloud, and developer trials.
○ Proficiency in managing deployments in Kubernetes clusters, including offline installations, upgrades via Helm, and adaptations for client restrictions (e.g., no additional tools or VMs).
○ Track record of handling client interactions, such as asking probing questions about infrastructure (e.g., OS versions, storage solutions, network restrictions) and explaining technical concepts clearly.
● Technical Skills:
○ Strong knowledge of Helm syntax and functionality (e.g., Go templating, hooks, subcharts, dependency management).
○ Expertise in containerization with Docker, including image management (save/load, registries like Harbor or ECR).
○ Familiarity with CI/CD tools such as Jenkins, ArgoCD, GitHub Actions, and GitOps for automated and manual deployments.
○ Understanding of storage solutions for on-prem and cloud, including object/file storage (e.g., MinIO, Ceph, NFS, cloud-native options like S3/EBS).
○ In-depth knowledge of Kubernetes concepts: StatefulSets, PersistentVolumes, namespaces, HPA, liveness/readiness probes, network policies, and RBAC.
○ Solid grasp of cloud networking: VPCs (definition, boundaries, virtualization via SDN, differences from private clouds) and bare metal vs. virtual machines (trade-offs in resource efficiency, flexibility, and scalability).
○ Ability to work in air-gapped environments, preparing offline artifacts and ensuring self-contained deployments.
Job Responsibilities:
- Managing and maintaining the efficient functioning of containerized applications and systems within an organization
- Design, implement, and manage scalable Kubernetes clusters in cloud or on-premise environments
- Develop and maintain CI/CD pipelines to automate infrastructure and application deployments, and track all automation processes
- Implement workload automation using configuration management tools, as well as infrastructure as code (IaC) approaches for resource provisioning
- Monitor, troubleshoot, and optimize the performance of Kubernetes clusters and underlying cloud infrastructure
- Ensure high availability, security, and scalability of infrastructure through automation and best practices
- Establish and enforce cloud security standards, policies, and procedures
- Work with agile methodologies
Primary Requirements:
- Kubernetes: Proven experience in managing Kubernetes clusters (min. 2-3 years)
- Linux/Unix: Proficiency in administering complex Linux infrastructures and services
- Infrastructure as Code: Hands-on experience with CM tools like Ansible, as well as knowledge of resource provisioning with Terraform or other cloud-based utilities
- CI/CD Pipelines: Expertise in building and monitoring complex CI/CD pipelines to manage the build, test, packaging, containerization, and release processes of software
- Scripting & Automation: Strong scripting and process automation skills in Bash, Python
- Monitoring Tools: Experience with monitoring and logging tools (Prometheus, Grafana)
- Version Control: Proficient with Git and familiar with GitOps workflows.
- Security: Strong understanding of security best practices in cloud and containerized environments.
Skills/Traits that would be an advantage:
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Role : Senior Engineer Infrastructure
Key Responsibilities:
● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.
● Develop and maintain applications and services using Golang.
● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.
● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.
Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.
● In-depth knowledge of network architecture, system security, and infrastructure scalability.
● Proficiency with development tools, server management, and database systems.
● Strong experience with cloud services (AWS), including deployment, scaling, and management.
● Knowledge of Azure is a plus
● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.
● Strong problem-solving skills and analytical thinking.
● Excellent verbal and written communication skills.
● Ability to thrive in a collaborative team environment.
● Genuine passion for backend development and keen interest in scalable systems.
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while also developing your skills and furthering your career working with some of the largest customers.
JD:
Lead the pre-sales (25%) to post-sales (75%) efforts building Public/Hybrid Cloud solutions working collaboratively with Intuitive and client technical and business stakeholders
Be a customer advocate with an obsession for excellence, delivering measurable success for Intuitive's customers with secure, scalable, highly available cloud architectures that leverage AWS Cloud services
Experience in analyzing customer's business and technical requirements, assessing existing environment for Cloud enablement, advising on Cloud models, technologies, and risk management strategies
Apply creative thinking/approach to determine technical solutions that further business goals and align with corporate technology strategies
Extensive experience building Well Architected solutions in-line with AWS cloud adoption framework (DevOps/DevSecOps, Database/Data Warehouse/Data Lake, App Modernization/Containers, Security, Governance, Risk, Compliance, Cost Management and Operational Excellence)
Experience with application discovery, preferably with tools like Cloudscape, to discover application configurations, databases, filesystems, and application dependencies
Experience with Well Architected Review, Cloud Readiness Assessments and defining migration patterns (MRA/MRP) for application migration e.g. Re-host, Re-platform, Re-architect etc
Experience in architecting and deploying AWS Landing Zone architecture with CI/CD pipeline
Experience in the architecture and design of AWS cloud services to address scalability, performance, HA, security, availability, compliance, backup and DR, automation, alerting and monitoring, and cost
Hands-on experience in migrating applications to AWS leveraging proven tools and processes including migration, implementation, cutover and rollback plans and execution
Hands-on experience in deploying various AWS services, e.g. EC2, S3, VPC, RDS, Security Groups, etc., either manually or via IaC (IaC preferred)
Hands-on Experience in writing cloud automation scripts/code such as Ansible, Terraform, CloudFormation Template (AWS CFT) etc.
Hands-on Experience with application build/release processes CI/CD pipelines
Deep understanding of Agile processes (planning/stand-ups/retros etc.), and ability to interact with cross-functional teams, i.e. Development, Infrastructure, Security, Performance Engineering, and QA
Additional Requirements:
Work with Technology leadership to grow the Cloud & DevOps practice. Create cloud practice collateral
Work directly with sales teams to improve and help them drive the sales for Cloud & DevOps practice
Assist Sales and Marketing team in creating sales and marketing collateral
Write whitepapers and technology blogs to be published on social media and Intuitive website
Create case studies for projects successfully executed by Intuitive delivery team
Conduct sales enablement sessions to coach sales team on new offerings
Flexibility with work hours supporting customer’s requirement and collaboration with global delivery teams
Flexibility with Travel as required for Pre-sales/Post-sales, Design workshops, War-room Migration events and customer meetings
Strong passion for modern technology exploration and development
Excellent written and verbal communication, presentation, and collaboration skills; team leadership skills
Experience with Multi-cloud (Azure, GCP, OCI) is a big plus
Experience with VMware Cloud Foundation as well as Advanced Windows and Linux Engineering is a big plus
Experience with On-prem Data Engineering (Database, Data Warehouse, Data Lake) is a big plus
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research, build, and implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash etc.,
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
A network of the world's best developers, offering full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offering and capitalize on the cloud. We have our own IoT/AI platform, and we provide professional services on that platform to build custom clouds for our clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
This person MUST have:
- B.E Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting/setting up the Linux environment, and able to write shell scripts for any given requirement.
- 1+ years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Min 3 years of experience as an SRE automation engineer building, running, and maintaining production sites. Not looking for candidates who have experience only in L1/L2 support or Build & Deploy roles.
Location:
- Remotely, anywhere in India
Timings:
- The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
We are looking for candidates who have development experience and have delivered CI/CD-based projects. You should have good hands-on experience with Jenkins Master-Slave architecture and have used AWS native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. You should also have experience setting up cross-platform CI/CD pipelines that span different cloud platforms, or both on-premise and cloud platforms.
Job Description:
- Hands-on with AWS (Amazon Web Services) Cloud, including DevOps services and CloudFormation.
- Experience interacting with customers.
- Excellent communication.
- Hands-on experience creating and managing Jenkins jobs, and Groovy scripting.
- Experience in setting up cloud-agnostic and cloud-native CI/CD pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, PowerShell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands-on with version control systems like GitHub, GitLab, TFS, BitBucket, etc.
Job Summary
Creates, modifies, and maintains software applications individually or as part of a team. Provides technical leadership on a team, including training and mentoring of other team members. Provides technology and architecture direction for the team, department, and organization.
Essential Duties & Responsibilities
- Develops software applications and supporting infrastructure using established coding standards and methodologies
- Sets the example for software quality through multiple levels of automated tests, including but not limited to unit, API, end-to-end, and load tests.
- Self-starter and self-organized; able to work without supervision
- Develops tooling, test harnesses and innovative solutions to understand and monitor the quality of the product
- Develops infrastructure as code to reliably deploy applications on demand or through automation
- Understands cloud managed services and builds scalable and secure applications using them
- Creates proof of concepts for new ideas that answer key questions of feasibility, desirability, and viability
- Work with other technical leaders to establish coding standards, development best practices and technology direction
- Performs thorough code reviews that promote better understanding throughout the team
- Work with architects, designers, business analysts and others to design and implement high quality software solutions
- Builds intuitive user interfaces with the end user persona in mind using front end frameworks and styling
- Assist product owners in backlog grooming, story breakdown and story estimation
- Collaborate and communicate effectively with team members and other stakeholders throughout the organization
- Document software changes for use by other engineers, quality assurance and documentation specialists
- Master the technologies, languages, and practices used by the team and project assigned
- Train others in the technologies, languages, and practices used by the team
- Troubleshoots, instruments, and debugs existing software, resolving root causes of defective behavior
- Guide the team in setting up the infrastructure in the cloud.
- Setup the security protocols for the cloud infrastructure
- Works with the team in setting up the data hub in the cloud
- Create dashboards for the visibility of the various interactions between the cloud services
- Other duties as assigned
Education
- BA/BS in Computer Science, a related field or equivalent work experience
Minimum Qualifications
- Mastered advanced programming concepts, including object-oriented programming
- Mastered technologies and tools utilized by team and project assigned
- Able to train others on general programming concepts and specific technologies
- Minimum 8 years’ experience developing software applications
Skills/Knowledge
- Must be expert in advanced programming skills and database technology
- Must be expert in at least one technology and/or language and proficient in multiple technologies and languages:
- (Specific languages needed will vary based on development department or project)
- .Net Core, C#, Java, SQL, JavaScript, Typescript, Python
- Additional desired skills:
- Single-Page Applications, Angular (v9), Ivy, RXJS, NGRX, HTML5, CSS/SASS, Web Components, Atomic Design
- Test First approach, Test Driven Development (TDD), Automated testing (Protractor, Jasmine), Newman Postman, artillery.io
- Microservices, Terraform, Jenkins, Jupyter Notebook, Docker, NPM, Yarn, Nuget, NodeJS, Git/Gerrit, LaunchDarkly
- Amazon Web Services (AWS), Lambda, S3, Cognito, Step Functions, SQS, IAM, Cloudwatch, Elasticache
- Database Design, Optimization, Replication, Partitioning/Sharding, NoSQL, PostgreSQL, MongoDB, DynamoDB, Elastic Search, PySpark, Kafka
- Agile, Scrum, Kanban, DevSecOps
- Strong problem-solving skills
- Outstanding communications and interpersonal skills
- Strong organizational skills and ability to multi-task
- Ability to track software issues to successful resolution
- Ability to work in a collaborative fast paced environment
- Setting up complex AWS data storage hub
- Well versed in setting up infrastructure security in the interactions between the planned components
- Experienced in setting up dashboards for analyzing the various operations in the AWS infra setup.
- Ability to learn new development language quickly and apply that knowledge effectively