

Our client is a call management solutions company that helps small to mid-sized businesses manage customer calls and queries through its virtual call center. It is an AI- and cloud-based call operating facility that is both affordable and feature-optimized. Advanced features such as call recording, IVR, toll-free numbers, and call tracking are automation-based and enhance call handling quality and process for each client, per their requirements. They serve over 6,000 business clients, including large accounts like Flipkart and Uber.
- Being involved in Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database Management, Backups, and Monitoring.
- Creating and managing CI/CD pipelines for microservice architectures.
- Creating and managing application configuration.
- Researching and planning architectures and tools for smooth deployments.
- Logging, metrics and alerting management.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in designing CI/CD pipelines using Jenkins. Experience in deployment using Ansible.
- Experience in microservices architecture deployment; hands-on experience with Docker, Kubernetes, EKS.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in AWS Services. Deployment, Monitoring and troubleshooting applications in AWS.
- Configuration management tools like Ansible/Chef/Puppet.
- Proficient in deployment of applications behind load balancers and proxy servers such as NGINX and Apache.
- Proficient in Bash scripting; Python scripting is an advantage.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios. Graylog, Splunk, Prometheus, and Grafana are a plus.
- Proficient in Configuration Management.
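Several of the bullets above (Bash/Python scripting, logging, metrics, and alerting) describe small automation scripts. A minimal, hypothetical sketch of that kind of task in Python — the log format and the 5% alert threshold are invented for illustration, not taken from any real system:

```python
# Hypothetical sketch: scan log lines, compute an error rate, and decide
# whether an alert should fire. Log format and threshold are invented.

def error_rate(log_lines):
    """Return the fraction of lines whose level is ERROR."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if " ERROR " in line)
    return errors / len(log_lines)

def should_alert(log_lines, threshold=0.05):
    """Fire an alert when more than `threshold` of lines are errors."""
    return error_rate(log_lines) > threshold

logs = [
    "2024-01-01T00:00:00 INFO request served",
    "2024-01-01T00:00:01 ERROR upstream timeout",
    "2024-01-01T00:00:02 INFO request served",
    "2024-01-01T00:00:03 INFO request served",
]
print(should_alert(logs))  # 1 error in 4 lines = 25%, above the 5% threshold
```

In practice the same logic would sit behind a Prometheus rule or a cron job rather than a one-off script, but the shape of the work is the same.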

Lead DevSecOps Engineer
Location: Pune, India (In-office) | Experience: 3–5 years | Type: Full-time
Apply here → https://lnk.ink/CLqe2
About FlytBase:
FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.
We're building intelligent autonomy — not just automation — and security is core to that vision.
What You’ll Own
You’ll be leading and building the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.
Expect to:
- Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
- Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
- Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
- Build for Zero Trust security — logs, secrets, audits, access policies
- Lead incident response, postmortems, and playbooks to reduce MTTR
- Automate and secure CI/CD pipelines with SAST, DAST, image hardening
- Script your way out of toil using Python, Bash, or LLM-based agents
- Work alongside dev, platform, and product teams to ship secure, scalable systems
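One of the bullets above mentions reducing MTTR through incident response and postmortems. A minimal sketch of the bookkeeping behind that metric — the incident records here are invented for illustration:

```python
# Hypothetical sketch: given incident open/resolve timestamps, compute
# mean time to recovery (MTTR). Incident data is invented.
from datetime import datetime, timedelta

def mttr(incidents):
    """Mean time to recovery across (opened, resolved) pairs."""
    durations = [resolved - opened for opened, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 30)),    # 30 min
    (datetime(2024, 1, 2, 14, 0), datetime(2024, 1, 2, 15, 30)),  # 90 min
]
print(mttr(incidents))  # 1:00:00 -- a 60-minute average
```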
What We’re Looking For:
You’ve probably done a lot of this already:
- 3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
- Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
- Strong in Linux internals, OS hardening, and network security
- Built and owned CI/CD pipelines, IaC, and automated releases
- Written scripts (Python/Bash) that saved your team hours
- Familiar with SOC 2, ISO 27001, threat detection, and compliance work
Bonus if you’ve:
- Played with LLMs or AI agents to streamline ops
- Built bots that monitor, patch, or auto-deploy
What It Means to Be a Flyter
- AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
- Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
- Joy in complexity: Security + infra + scale = your happy place.
- Radical candor: You give and receive sharp feedback early — and grow faster because of it.
- Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
- H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
- Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.
Perks:
▪ Unlimited leave & flexible hours
▪ Top-tier health coverage
▪ Budget for AI tools, courses
▪ International deployments
▪ ESOPs and high-agency team culture
Apply here → https://lnk.ink/CLqe2
Key Responsibilities:
- Build and Automation: Utilize Gradle for building and automating software projects. Ensure efficient and reliable build processes.
- Scripting: Develop and maintain scripts using Python and Shell scripting to automate tasks and improve workflow efficiency.
- CI/CD Tools: Implement and manage Continuous Integration and Continuous Deployment (CI/CD) pipelines using tools such as Harness, GitHub Actions, Jenkins, and other relevant technologies. Ensure seamless integration and delivery of code changes.
- Cloud Platforms: Demonstrate proficiency in working with cloud platforms including OpenShift, Azure, and Google Cloud Platform (GCP). Deploy, manage, and monitor applications in cloud environments.
Share CV to
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three
Job Description:
We are looking to recruit engineers eager to learn cloud solutions using Amazon Web Services (AWS). We'll prefer an engineer who is passionate about AWS Cloud technology, passionate about helping customers succeed, passionate about quality, and who truly enjoys what they do. The qualified candidate for the AWS Cloud Engineer position is someone who has a can-do attitude and is an innovative thinker.
- Be hands-on, with responsibility for the installation, configuration, and ongoing management of Linux-based solutions on AWS for our clients.
- Responsible for creating and managing Autoscaling EC2 instances using VPCs, Elastic Load Balancers, and other services across multiple availability zones to build resilient, scalable and failsafe cloud solutions.
- Familiarity with other AWS services such as CloudFront, ALB, EC2, RDS, Route 53 etc. desirable.
- Working knowledge of RDS, DynamoDB, GuardDuty, WAF, and multi-tier architecture.
- Proficient in working with Git, CI/CD pipelines, AWS DevOps, Bitbucket, and Ansible.
- Proficient in working with Docker Engine, containers, and Kubernetes.
- Expertise in migrating workloads to AWS from other cloud providers.
- Should be versatile in problem solving and resolve complex issues ranging from OS and application faults to creatively improving solution design
- Should be ready to work in rotation on a 24x7 schedule, and be available on call at other times due to the critical nature of the role
- Fault finding, analysis, and logging of information for reporting performance exceptions.
- Deployment, automation, management, and maintenance of AWS cloud-based production system.
- Ensuring availability, performance, security, and scalability of AWS production systems.
- Management of creation, release, and configuration of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Pre-production acceptance testing for quality assurance.
- Provision of critical system security by leveraging best practices and prolific cloud security solutions.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on AWS platform.
- Designing, maintenance and management of tools for automation of different operational processes.
Desired Candidate Profile
o Customer-oriented personality with good communication skills; able to communicate effectively both verbally and in writing.
o Be a team player that collaborates and shares experience and expertise with the rest of the team.
o Understands database systems such as MSSQL, MongoDB, MySQL, MariaDB, DynamoDB, RDS.
o Understands web servers such as Apache and Nginx.
o Must be RHEL certified.
o In depth knowledge of Linux Commands and Services.
o Able to manage all internet applications, including FTP, SFTP, Nginx, Apache, MySQL, PHP.
o Good communication skills.
o At least 3-7 years of experience in AWS and DevOps.
Company Profile:
i2k2 Networks is a trusted name in the IT cloud hosting services industry. We help enterprises with cloud migration, cost optimization, support, and fully managed services, which helps them move faster and scale with lower IT costs. i2k2 Networks offers a complete range of cutting-edge solutions that drive Internet-powered business models. We excel in:
- Managed IT Services
- Dedicated Web Servers Hosting
- Cloud Solutions
- Email Solutions
- Enterprise Services
- Round the clock Technical Support
https://www.i2k2.com/
Regards
Nidhi Kohli
i2k2 Networks Pvt Ltd.
AM - Talent Acquisition
LogiNext is looking for a technically savvy and passionate Junior DevOps Engineer to cater to the development and operations efforts in product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.
You have knowledge of building secure, high-performing, and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in non-production and production environments.
Responsibilities:
- Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 0 to 1 years of experience in designing and maintaining high-volume, scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix administration and Python/Shell scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in enterprise application development, maintenance, and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment, and decision-making skills
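The responsibilities above include defining processes to identify performance bottlenecks. A minimal, hypothetical sketch of the usual first step — computing a tail-latency percentile (p95) from request timings. The nearest-rank method and the sample latencies are chosen for illustration only:

```python
# Hypothetical sketch: nearest-rank percentile over request latencies,
# a common first signal that a service needs scaling. Data is invented.
import math

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers (pct in 0..100)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 14, 13, 200, 16, 12, 14, 13, 15]  # one slow outlier
print(percentile(latencies_ms, 95))  # the outlier dominates the p95
```

The median here would look healthy (~14 ms); the p95 exposes the outlier, which is why alerting on tail percentiles rather than averages is standard practice.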
The candidate must have 2-3 years of experience in the domain. The responsibilities include:
● Deploying system on Linux-based environment using Docker
● Manage & maintain the production environment
● Deploy updates and fixes
● Provide Level 1 technical support
● Build tools to reduce occurrences of errors and improve customer experience
● Develop software to integrate with internal back-end systems
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
● Develop scripts to automate visualization
● Design procedures for system troubleshooting and maintenance
Requirements:
● Experience working on Linux-based infrastructure
● Excellent understanding of MERN Stack, Docker & Nginx (Good to have Node Js)
● Configuring and managing databases such as MongoDB
● Excellent troubleshooting skills
● Experience of working with AWS/Azure/GCP
● Working knowledge of various tools, open-source technologies, and cloud services
● Awareness of critical concepts in DevOps and Agile principles
● Experience of CI/CD Pipeline
Experience: 8-10 years
Notice Period: max 15 days
*Must-haves*
1. Knowledge about Database/NoSQL DB hosting fundamentals (RDS multi-AZ, DynamoDB, MongoDB, and such)
2. Knowledge of different storage platforms on AWS (EBS, EFS, FSx) - mounting persistent volumes with Docker Containers
3. In-depth knowledge of security principles on AWS (WAF, DDoS, Security Groups, NACLs, IAM groups, and SSO)
4. Knowledge of CI/CD platforms is required (Jenkins, GitHub Actions, etc.) - migration of AWS CodePipeline pipelines to GitHub Actions
5. Knowledge of a vast variety of AWS services (SNS, SES, SQS, Athena, Kinesis, S3, ECS, EKS, etc.) is required
6. Knowledge of an Infrastructure as Code tool is required. We use CloudFormation (Terraform is a plus); ideally, we would like to migrate from CloudFormation to Terraform
7. Setting CloudWatch alarms and SMS/email/Slack alerts
8. Some knowledge of configuring any kind of monitoring tool such as Prometheus, Dynatrace, etc. (we currently use Datadog and CloudWatch)
9. Experience with any CDN provider configurations (Cloudflare, Fastly, or CloudFront)
10. Experience with either Python or Go as a scripting language.
11. Experience with Git branching strategy
12. Containers hosting knowledge on both Windows and Linux
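Item 7 above mentions setting CloudWatch alarms. A minimal sketch of the evaluation logic behind such an alarm — a metric goes into ALARM only after several consecutive datapoints breach the threshold. This is plain Python for illustration, not the AWS API; the CPU values and thresholds are invented:

```python
# Hypothetical sketch of CloudWatch-style alarm evaluation: ALARM only
# when the last `periods` datapoints all exceed the threshold, which
# avoids paging on a single transient spike. Data is invented.

def alarm_state(datapoints, threshold, periods=3):
    """Return 'ALARM' if the last `periods` datapoints all exceed
    `threshold`, else 'OK'."""
    if len(datapoints) < periods:
        return "OK"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"

cpu = [41.0, 55.2, 78.9, 92.1, 95.4, 93.8]  # percent utilisation
print(alarm_state(cpu, threshold=90.0))  # last 3 points all > 90 -> ALARM
```

Requiring consecutive breaches mirrors CloudWatch's "datapoints to alarm" setting; a single spike (one point over 90) would leave the state at OK.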
The below list is *Nice to Have*
1. Integration experience with Code Quality tools (SonarQube, NetSparker, etc) with CI/CD
2. Kubernetes
3. CDN's other than CloudFront (Cloudflare, Fastly, etc)
4. Collaboration with multiple teams
5. GitOps
- 5+ years hands-on experience with designing, deploying and managing core AWS services and infrastructure
- Proficiency in scripting using Bash, Python, Ruby, Groovy, or similar languages
- Experience in source control management, specifically with Git
- Hands-on experience in Unix/Linux and bash scripting
- Experience building and managing Helm-based build-and-release CI/CD pipelines for Kubernetes platforms (EKS, OpenShift, GKE)
- Strong experience with orchestration and config management tools such as Terraform, Ansible, or CloudFormation
- Ability to debug and analyze issues leveraging tools like AppDynamics, New Relic, and Sumo Logic
- Knowledge of Agile Methodologies and principles
- Good writing and documentation skills
- Strong collaborator with the ability to work well with core teammates and our colleagues across STS
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.


