
DevOps Engineer
at a Series-B funded fintech company based in Bangalore.
Requirements and Qualifications
- Bachelor's degree in Computer Science Engineering or a related field
- 4+ years of experience
- Excellent analytical and problem-solving skills
- Strong knowledge of Linux systems and internals
- Programming experience in Python/Shell scripting
- Strong AWS skills with knowledge of EC2, VPC, S3, RDS, CloudFront, Route 53, etc.
- Experience in containerization (Docker) and container orchestration (Kubernetes)
- Experience in DevOps & CI/CD tools such as Git, Jenkins, Terraform, Helm
- Experience with SQL and NoSQL databases such as MySQL, MongoDB, and Elasticsearch
- Debugging and troubleshooting skills using tools such as strace, tcpdump, etc.
- Good understanding of networking protocols and security concepts (VPN, VPC, internet gateways, NAT, availability zones, subnets)
- Experience with monitoring and data-analysis tools such as Prometheus, EFK, etc.
- Good communication and collaboration skills, and attention to detail
- Willingness to participate in a rotating on-call schedule

About PGAGI:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.
Key Responsibilities:
AI Model Deployment & Integration
- Assist in containerizing and deploying AI/ML models into production using Docker.
- Support integration of models into existing systems and APIs.
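By way of illustration, here is a minimal sketch of the kind of model-serving code such a Docker deployment might wrap. FastAPI, the /predict route, and the stub scoring function are illustrative assumptions, not part of the role description:

```python
# Minimal model-serving sketch (hypothetical): in a real deployment the
# container image would install these dependencies and load an actual
# model artifact (e.g. joblib/ONNX) instead of the stub below.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]

def score(values: list[float]) -> float:
    # Stand-in for a real model loaded from an artifact.
    return sum(values) / max(len(values), 1)

@app.post("/predict")
def predict(features: Features) -> dict:
    return {"prediction": score(features.values)}

# Run locally with: uvicorn app:app --host 0.0.0.0 --port 8080
# A Dockerfile would copy this file and run the same uvicorn command.
```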
Infrastructure Management
- Help manage cloud and on-premise environments to ensure scalability and consistency.
- Work with Kubernetes for orchestration and environment scaling.
CI/CD Pipeline Automation
- Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
- Implement basic automated testing and rollback mechanisms.
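As one hedged interpretation of the rollback item above, a small Python health gate that a pipeline could run after a deploy: if the new release never reports healthy, the script exits non-zero so the CI job can trigger a rollback step. The endpoint URL, retry policy, and exit-code convention are assumptions:

```python
# Post-deploy health gate (sketch): exit non-zero if the new release
# never reports healthy, so the CI job can run its rollback step.
import sys
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint
ATTEMPTS = 10
DELAY_SECONDS = 6

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

for attempt in range(1, ATTEMPTS + 1):
    if healthy():
        print(f"healthy after {attempt} attempt(s)")
        sys.exit(0)
    time.sleep(DELAY_SECONDS)

print("release never became healthy; signalling rollback")
sys.exit(1)
```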
Hosting & Web Environment Management
- Assist in managing hosting platforms, web servers, and CDN configurations.
- Support DNS, load balancer setups, and ensure high availability of web services.
Monitoring, Logging & Optimization
- Set up and maintain monitoring/logging tools like Prometheus and Grafana.
- Participate in troubleshooting and resolving performance bottlenecks.
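For the Prometheus item above, a minimal sketch using the official `prometheus_client` Python library; the metric names and the port are illustrative assumptions:

```python
# Expose application metrics for Prometheus to scrape (sketch).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")

@LATENCY.time()
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at :9100/metrics (port assumed)
    while True:
        handle_request()
```

A Prometheus server would then scrape this endpoint, and Grafana would chart the resulting series.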
Security & Compliance
- Apply basic DevSecOps practices including security scans and access control implementations.
- Follow security and compliance checklists under supervision.
Cost & Resource Management
- Monitor resource usage and suggest cost optimization strategies in cloud environments.
Documentation
- Maintain accurate documentation for deployment processes and incident responses.
Continuous Learning & Innovation
- Suggest improvements to workflows and tools.
- Stay updated with the latest DevOps and AI infrastructure trends.
Requirements:
- Around 6 months of experience in a DevOps or related technical role (internship or professional).
- Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
- Exposure to scripting languages (e.g., Bash, Python) is a plus.
- Strong problem-solving skills and eagerness to learn.
- Good communication and documentation abilities.
Compensation
- Joining Bonus: INR 2,500 one-time bonus upon joining.
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now.
#DevOps #Docker #Kubernetes #DevOpsIntern
Candidates must have a minimum of 8 years of IT experience and be available to work in the IST time zone.
Candidates should have hands-on experience in
DevOps tooling (GitLab, Artifactory, SonarQube, AquaSec, Terraform, and Docker/K8s)
Thanks & Regards,
Anitha. K
TAG Specialist
Job Description
Role Overview:
We're looking for a passionate DevOps engineer with a minimum of 10 years' experience across all levels, who will work closely with the development teams in an Agile setup to continuously improve, support, secure, and operate our production and test environments. We believe in automating our infrastructure as much as possible and pursuing challenging problems in a sustainable and repeatable way.
Our Toolchain
- Ansible, Docker, Kubernetes, Terraform, GitLab, Jenkins, Fastlane, New Relic, Datadog, SonarQube, IaC
- Apache, Nginx, Linux, Ubuntu, Microservices, Python, Shell, Bash, Helm
- Selenium, JMeter, Slack, Jira, SAST, OSSEC, OWASP
- Node.js, PHP, Golang, MySQL, MongoDB, Firebase, Redis, Elasticsearch
- VPC, API Gateway, Cognito, DocumentDB, ECS, Lambda, Route53, ACM, S3, EC2, IAM
You'll need:
- Production experience with distributed/scalable systems consisting of multiple microservices and/or high-traffic web applications
- Experience with configuration management systems such as Ansible, Chef, Puppet
- Extensive knowledge of the Linux operating system
- Troubleshooting skills that range from diagnosis to solution for Dev team issues
- Knowledge of how the web works and HTTP fundamentals
- Knowledge of IP networking, DNS, load balancing, and firewalling
Bonus points, if you have:
- Experience in agile development and delivery processes.
- Good knowledge of at least one programming language; Tecstub uses, e.g., Node.js and PHP.
- Experience in containerizing applications and deploying them to production (Docker, Kubernetes).
- Experience in building modern Terraform infrastructures in cloud environments (AWS, GCP, etc.).
- Experience with application and database performance-monitoring tools (New Relic, Datadog, ClusterControl, etc.).
- Experience with SQL databases like MySQL, NoSQL stores, real-time data stores like Redis, or anything in between.
- Experience being part of the engineering team that built the platform.
- Knowledge of good security practices, including network security, system hardening, secure software, and compliance.
- Familiarity with automated build pipelines / continuous integration using GitLab, Jenkins, and Kubernetes/Docker. With this setup, we deploy to production twice per day!
Interview Process:
The entire interview process takes approximately 10 days.
- HR Screening Call (15 minutes)
- Technical Interview Round Level 1 (30 Minutes)
- Technical Interview Round Level 2 (60 minutes)
- Final Interview Round (60 minutes)
- Offer
About Tecstub:
Tecstub is a renowned global provider of comprehensive digital commerce solutions for some of the world's largest enterprises. With offices in North America and Asia-Pacific, our team offers end-to-end solutions such as strategic Solution Consulting, eCommerce website and application development, and support & maintenance services that are tailored to meet our clients' unique business goals. We are dedicated to delivering excellence by working as an extended partner, providing next-generation solutions that are sustainable, scalable, and future-proof. Our passionate and driven team of professionals has over a decade of experience in the industry and is committed to helping our clients stay ahead of the competition.
We value our employees and strive to create a positive work environment that promotes work-life balance and personal growth. As part of our commitment to our team, we offer a range of benefits to ensure our employees are supported and motivated.
- A 5-day work week that promotes work-life balance and allows our employees to take care of personal responsibilities while excelling in their professional roles.
- 30 annual paid leaves that can be utilized for various personal reasons, such as regional holidays, sick leaves, or any other personal needs. We believe that taking time off is essential for overall well-being and productivity.
- Additional special leaves for birthdays, maternity and paternity events to ensure that our employees can prioritize their personal milestones without any added stress.
- Health insurance coverage of 3 lakhs sum insured for our employees, spouse, and children, to provide peace of mind and security for their health needs.
- Vouchers and gifts for important life events such as birthdays and anniversaries, to celebrate our employees' milestones and show appreciation for their contributions to the company.
- A dedicated learning and growth budget for courses and certifications, to support our employees' career aspirations and encourage professional development.
- Company outings to celebrate our successes together and promote a sense of camaraderie among our team members. We believe that celebrating achievements is an important part of building a positive work culture.
Skills
AWS, Terraform, Kubernetes, GitHub, Apache, Bash, Docker, Ansible, Git, Microservices, Ubuntu, GitLab, CI/CD, Apache Server, Nginx, Node.js
Looking for an experienced candidate with strong development and programming experience. Preferred knowledge:
- Cloud computing (e.g., Kubernetes, AWS, Google Cloud, Azure)
- A strong development background, with programming experience in Java and/or Node.js (other languages such as Groovy or Python are a big bonus)
- Proficiency with Unix systems and Bash
- Proficiency with Git/GitHub/GitLab/Bitbucket
Desired skills-
- Docker
- Kubernetes
- Jenkins
- Experience in any scripting language (Python, Shell scripting, JavaScript)
- NGINX / Load Balancer
- Splunk / ETL tools
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body; they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.
Primary Responsibilities
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms
- Optimizing and executing the CI/CD pipelines of multiple products, and promoting releases to production environments in a timely manner
- Ensuring that mission-critical applications are deployed and optimised for high availability, security and privacy compliance, and disaster recovery
- Strategizing, implementing, and verifying secure coding techniques, and integrating code-security tools into continuous integration
- Ensuring the efficiency, responsiveness, scalability, and cross-platform compatibility of applications through analysis of captured metrics, testing frameworks, and debugging methodologies
- Producing technical documentation through all stages of development
- Establishing strong relationships with, and proactively communicating with, team members as well as individuals across the organisation
Requirements
- Minimum of 6 years of experience with DevOps tools.
- Working experience with Linux and container orchestration and management technologies (Docker, Kubernetes, EKS, ECS, …).
- Hands-on experience with infrastructure-as-code solutions (CloudFormation, Terraform, Ansible, etc.).
- Background in building and maintaining CI/CD pipelines (GitLab CI, Jenkins, CircleCI, GitHub Actions, etc.).
- Experience with the HashiCorp stack (Vault, Packer, Nomad, etc.); see the Vault sketch after this list.
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana, etc.).
- A DevOps mindset and experience with Agile/Scrum methodology.
- Basic knowledge of storage and databases (SQL and NoSQL).
- Good understanding of networking technologies, HAProxy, firewalling, and security.
- Experience with security vulnerability scans and remediation.
- Experience with API security and credentials management.
- Experience with microservice configurations across dev/test/prod environments.
- Ability to quickly pick up new languages and technologies.
- A strong team-player attitude with excellent communication skills.
- A very high sense of ownership.
- Deep interest in and passion for technology.
- Ability to plan projects, execute them, and meet deadlines.
- Excellent verbal and written English communication.
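As promised above, a small hedged sketch of reading a credential from HashiCorp Vault with the `hvac` Python client; the Vault address, token source, secret path, and key name are assumptions, and production setups would typically use a stronger auth method (AppRole, AWS IAM, etc.):

```python
# Reading a secret from HashiCorp Vault with the hvac client (sketch).
import os

import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],  # token auth for brevity only
)

# KV v2 read: the response nests the key-value pairs under data -> data.
secret = client.secrets.kv.v2.read_secret_version(path="apps/my-service")
db_password = secret["data"]["data"]["db_password"]  # hypothetical key
```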
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI and satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll figure out how to deploy applications ensuring high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can go up and down on command (see the sketch after this list).
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
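The sketch referenced in the first item above: a hedged boto3 snippet that scales an ECS service "on command" by setting its desired task count. The cluster and service names are hypothetical:

```python
# Scale an ECS service up or down by setting its desired task count (sketch).
import sys

import boto3

def scale_service(cluster: str, service: str, desired_count: int) -> None:
    ecs = boto3.client("ecs")
    ecs.update_service(
        cluster=cluster,
        service=service,
        desiredCount=desired_count,
    )
    print(f"{service}: desiredCount set to {desired_count}")

if __name__ == "__main__":
    # Usage: python scale.py 4   (cluster/service names are hypothetical)
    scale_service("data-pipelines", "ingestion-api", int(sys.argv[1]))
```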
Requirements
- 1-4 years of work experience.
- Strong emphasis on infrastructure as code: CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization: Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community whose tools you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's also the non-work fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
The brand is associated with some of the major icons across categories and has tie-ups with industries covering fashion, sports, and, of course, music. The founders are marketing grads with vast experience in consumer lifestyle products and other major brands. With their vigorous efforts toward quality and marketing, they have struck a chord with major e-commerce brands and even consumers.
What you will do:
- Defining and documenting best practices and strategies regarding application deployment and infrastructure maintenance
- Providing guidance, thought leadership and mentorship to development teams to build cloud competencies
- Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design
- Managing cloud environments in accordance with company security guidelines
- Developing and implementing technical efforts to design, build and deploy AWS applications at the direction of lead architects, including large-scale data processing, computationally intensive statistical modeling and advanced analytics
- Participating in all aspects of the software development life cycle for AWS solutions, including planning, requirements, development, testing, and quality assurance
- Troubleshooting incidents, identifying root cause, fixing and documenting problems and implementing preventive measures
- Educating teams on the implementation of new cloud-based initiatives, providing associated training as required
Desired Candidate Profile
What you need to have:
- Bachelor's degree in computer science or information technology
- 2+ years of experience as architect, designing, developing, and implementing cloud solutions on AWS platforms
- Experience in several of the following areas: database architecture, ETL, business intelligence, big data, machine learning, and advanced analytics
- Proven ability to collaborate with multi-disciplinary teams of business analysts, developers, data scientists and subject matter experts
- Self-motivation with the ability to drive features to delivery
- Strong analytical and problem solving skills
- Excellent oral and written communication skills
- Good logical sense, strong technical skills and the ability to learn new technologies quickly
- AWS certifications are a plus
- Knowledge of web services, APIs, REST, and RPC
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum, which breaks the conventional market-research turnaround time.
SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights with quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This makes us challenge conventional software-development design principles. The team likes to grind and helps each other out in tough situations.
Day-to-day responsibilities include:
- Work on the deployment of code via Bitbucket, AWS CodeDeploy, and manual processes
- Work on Linux/Unix OSes and multi-tech application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Ease developers' lives so that they can focus on business logic rather than deploying and maintaining it
- Manage sprint releases
- Educate the team on best practices
- Find ways to avoid human error and save time by automating processes using Terraform, CloudFormation, Bitbucket Pipelines, CodeDeploy, and scripting (see the sketch after this list)
- Implement cost-effective measures on the cloud and minimize existing costs
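The sketch referenced in the automation item above: a hedged boto3 call that starts an AWS CodeDeploy deployment from an S3 revision bundle. The application, deployment-group, bucket, and key names are hypothetical:

```python
# Trigger an AWS CodeDeploy deployment from an S3 revision bundle (sketch).
import boto3

codedeploy = boto3.client("codedeploy")

response = codedeploy.create_deployment(
    applicationName="surveysensum-web",        # hypothetical
    deploymentGroupName="production",          # hypothetical
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-release-artifacts",  # hypothetical
            "key": "builds/web-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    description="Sprint release via script",
)
print("deployment id:", response["deploymentId"])
```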
Skills and prerequisites
- OOP knowledge
- Problem-solving nature
- Willingness to do R&D
- Works with the team and supports their queries patiently
- Brings new things to the table; stays updated
- Prioritizes solutions over problems
- Willingness to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform: creating and managing EC2, Lambda, IAM, S3, etc.
- Basic Linux administration
- Docker and orchestration (great to have)
- Scripting: Python (preferably) or Bash
Location – Pune
Experience – 1.5 to 3 years
Payroll: Direct with client
Salary Range: 3 to 5 lakhs (depending on existing salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, other AWS services and resources, and other log sources.
• Collect and store logs
• Monitor and analyze logs
• Configure alarms (see the sketch after this list)
• Configure dashboards
• Prepare and follow SOPs and documentation.
• Good understanding of AWS in a DevOps context.
• Experience with AWS services (EC2, ECS, CloudWatch, VPC, networking).
• Experience with a variety of infrastructure, application, and log monitoring tools such as Prometheus and Grafana.
• Familiarity with Docker, Linux, and Linux security.
• Knowledge of and experience with container-based architectures like Docker.
• Experience troubleshooting AWS services.
• Experience configuring AWS services like EC2, S3, and ECS.
• Experience with Linux system administration and engineering skills on cloud infrastructure.
• Knowledge of load balancers, firewalls, and network switching components.
• Knowledge of internet-based technologies: TCP/IP, DNS, HTTP, SMTP, and networking concepts.
• Knowledge of security best practices.
• Comfortable supporting production environments 24x7.
• Strong communication skills.
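The sketch referenced in the "configure alarms" item above: a hedged boto3 call that creates a CloudWatch CPU alarm for a single EC2 instance. The instance ID, SNS topic ARN, and thresholds are assumptions:

```python
# Create a CloudWatch CPU alarm for an EC2 instance (sketch).
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",  # hypothetical
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=2,      # alarm after 2 consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # hypothetical topic
)
```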
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience with designing systems, monitoring metrics, and reading graphs
- Have knowledge of AWS, Kubernetes, and Docker
- Be able to work well in a remote development environment
- Be able to communicate in English at a native level, both speaking and writing
- Be responsible to your fellow remote team members
- Be highly communicative and go out of your way to contribute to the team and help others