50+ Docker Jobs in India
We are looking for a passionate DevOps Engineer who can support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and be able to clearly articulate how they work. Knowledge of shell scripting and security is a must, and any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with technologies such as Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.
Responsibilities and Accountabilities:
- As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infra components of VakilSearch’s product which are hosted in cloud services & on-prem facility
- Design and build tools and frameworks that support deploying and managing our platform; explore new tools, technologies, and processes to improve speed, efficiency, and scalability
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
- Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
- Resolve incidents as escalated from Monitoring tools and Business Development Team
- Implement and follow security guidelines, both policy and technology to protect our data
- Identify the root cause of issues, develop long-term solutions to fix recurring problems, and document them
- Willingness to perform production operations activities, including at night when required
- Ability to automate recurring tasks with scripts to increase velocity and quality (see the sketch after this list)
- Ability to manage and deliver multiple project phases at the same time
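For illustration, a minimal sketch of the kind of recurring-task automation this role calls for: a cron-friendly Python script that checks disk usage and probes service health endpoints. The paths, threshold, and URL are hypothetical placeholders, not details from the posting.

```python
#!/usr/bin/env python3
"""Cron-friendly sanity check: disk usage and service health (hypothetical targets)."""
import shutil
import sys
import urllib.request

DISK_PATHS = ["/", "/var"]                       # hypothetical mount points to watch
DISK_LIMIT_PCT = 85                              # alert threshold, percent used
HEALTH_URLS = ["http://localhost:8080/health"]   # hypothetical service endpoint

def check_disks() -> list[str]:
    problems = []
    for path in DISK_PATHS:
        usage = shutil.disk_usage(path)
        used_pct = usage.used / usage.total * 100
        if used_pct > DISK_LIMIT_PCT:
            problems.append(f"{path} is {used_pct:.0f}% full")
    return problems

def check_services() -> list[str]:
    problems = []
    for url in HEALTH_URLS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status != 200:
                    problems.append(f"{url} returned HTTP {resp.status}")
        except OSError as exc:
            problems.append(f"{url} unreachable: {exc}")
    return problems

if __name__ == "__main__":
    issues = check_disks() + check_services()
    for issue in issues:
        print(f"ALERT: {issue}", file=sys.stderr)
    sys.exit(1 if issues else 0)   # nonzero exit lets cron or monitoring flag it
```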
I Qualification(s):
- Experience in working with Linux Server, DevOps tools, and Orchestration tools
- Linux, AWS, GCP, Azure, CompTIA, and other certifications are a value-add
II Experience Required in DevOps Aspects:
- Length of Experience: 1–4 years
- Nature of Experience:
- Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
- Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
- Hands-on experience with any of the configuration management / IaC tools like Chef, Terraform, and CloudFormation [Ansible & Puppet are a value add]
- Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
- Experience with containerization and orchestration tools like Docker and Kubernetes [Docker Swarm is a value add]
- Good scripting skills in at least one interpreted language: shell/Bash, Ruby, Python, or Perl
- Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
- Good at version control and source code management systems like Git and GitHub
- Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
- Experience with web servers such as Nginx and Apache
- Knowledge of Redis, RabbitMQ, ELK, and REST APIs [MLOps tools are a value add]
- Knowledge of Puma, Unicorn, Gunicorn & Yarn
- Hands-on VMWare ESXi/Xencenter deployments is a value add
- Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
- Deploying, configuring, and maintaining Linux server systems on-premises and off-premises
- Code Quality like SonarQube is a value-add
- Test Automation like Selenium, JMeter, and JUnit is a value-add
- Experience in Heroku and OpenStack is a value-add
- Experience in identifying inbound and outbound threats and resolving them
- Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages
- Documenting the Security fix for future use
- Establish cross-team collaboration with security built into the software development lifecycle
- Forensics and Root Cause Analysis skills are mandatory
- Weekly Sanity Checks of the on-prem and off-prem environment
III Skill Set & Personality Traits required:
- An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
- Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers
IV Age Group: 21 – 36 Years
V Cost to the Company: As per industry standards
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments (a minimal serving sketch follows this list).
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
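As a hedged illustration of the model-deployment responsibility noted above, here is a minimal sketch of serving a pre-trained scikit-learn model behind a REST endpoint. The model artifact name and request shape are assumptions; a production setup would add a WSGI server, containerization, and the monitoring described above.

```python
# Minimal model-serving sketch (assumes Flask and scikit-learn are installed,
# and a pre-trained model saved to model.joblib, a hypothetical artifact).
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained sklearn model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]            # expected: list of feature vectors
    preds = model.predict(features).tolist()  # convert ndarray for JSON output
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```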
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Job Title: Technical Lead (Java/Spring Boot/Cloud)
Location: Bangalore
Experience: 8 to 12 Years
Overview
We are seeking a highly accomplished and charismatic Technical Lead to drive the design, development, and delivery of high-volume, scalable, and secure enterprise applications. The ideal candidate will possess deep expertise in the Java ecosystem, particularly with Spring Boot and Microservices Architecture, coupled with significant experience in Cloud Solutions (AWS/Azure) and DevOps practices. This role requires a proven leader capable of setting "big picture" strategy while mentoring a high-performing team.
Key Responsibilities
Architecture Design
- Lead the architecture and design of complex, scalable, and secure cloud-native applications using Java/J2EE and the Spring Boot Framework.
- Design and implement Microservices Architecture and RESTful/SOAP APIs.
- Spearhead Cloud Solution Architecture, including the design and optimization of cloud-based infrastructure deployment with auto-scaling, fault-tolerant, and reliability capabilities (AWS/Azure).
- Guide teams on applying Architecture Concepts, Architectural Styles, and Design Patterns (e.g., UML, Object-Oriented Analysis and Design).
- Solution Architect complex migrations of enterprise applications to Cloud.
- Conduct Proof-of-Concepts (PoCs) for new technologies like Blockchain (Hyperledger) for solutions such as Identity Management.
Technical Leadership & Development
- Lead the entire software development process from conception to completion within an Agile/Waterfall and Cleanroom Engineering environment.
- Define and enforce best practices and coding standards for Java development, ensuring code quality, security, and performance optimization.
- Implement and manage CI/CD pipelines and DevOps practices to automate software delivery.
- Oversee cloud migration and transformation programs for enterprise applications, focusing on reducing infrastructure costs and improving scalability.
- Troubleshoot and resolve complex technical issues related to the Java/Spring Boot stack, databases (SQL Server, Oracle, MySQL, PostgreSQL, Elasticsearch, Redis), and cloud components.
- Ensure the adoption of Test-Driven Development (TDD), unit testing, and mocking practices.
People & Delivery Management
- Act as a charismatic people leader and transformative force, building and mentoring high-performing teams from the ground up.
- Drive Delivery Management, collaborating with stakeholders to align technical solutions with business objectives and managing large-scale programs from initiation to delivery.
- Utilize Excellent Communication & Presentation Skills to articulate technical strategies to both technical and non-technical stakeholders.
- Champion organizational change, driving adoption of new processes, ways of working, and technology platforms.
Required Technical Skills
- Languages: Java (JDK1.5+), Spring Core Framework, Spring Batch, Java Server Pages (JSP), Servlets, Apache Struts, JSON, Hibernate.
- Cloud: Extensive experience with Amazon Web Services (AWS) (Solution Architect certification preferred) and familiarity with Azure.
- DevOps/Containerization: CI/CD Pipelines, Docker.
- Databases: Strong proficiency in MS SQL Server, Oracle, MySQL, PostgreSQL, and NoSQL/caching (Elasticsearch, Redis).
Education and Certifications
- Master's or Bachelor's degree in a relevant field.
- Certified Amazon Web Services Solution Architect (or equivalent).
- Experience or certification in leadership is a plus.
Backend Engineer (Python / Django + DevOps)
Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)
About SurgePV
SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.
Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.
As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.
Role Overview
We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.
This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.
Key Responsibilities
- Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows (see the endpoint sketch after this list).
- Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
- Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
- Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
- Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
- Implement caching strategies and performance optimizations where required.
- Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
- Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
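As a hedged sketch of the REST and caching work described above, assuming Django REST Framework and a configured Django cache; the view, cache key, and pricing function are hypothetical stand-ins, not SurgePV's actual API:

```python
# Cached DRF endpoint sketch (hypothetical names throughout).
from django.core.cache import cache
from rest_framework.response import Response
from rest_framework.views import APIView

def compute_quote(design_id: str) -> dict:
    """Hypothetical stand-in for the real pricing logic."""
    return {"design_id": design_id, "price": 0.0}

class PricingQuoteView(APIView):
    def get(self, request, design_id: str) -> Response:
        cache_key = f"pricing:{design_id}"
        quote = cache.get(cache_key)
        if quote is None:                              # cache miss: compute and store
            quote = compute_quote(design_id)
            cache.set(cache_key, quote, timeout=300)   # cache for 5 minutes
        return Response(quote)
```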
Required Skills & Qualifications (Must-Have)
- 2–5 years of experience as a Backend Engineer.
- Strong proficiency in Python and Django / Django REST Framework.
- Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
- Proven experience designing and maintaining REST APIs in production environments.
- Hands-on DevOps experience, including:
- Docker and containerized services
- CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
- Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
- Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
- Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
- Ownership mindset with the ability to take systems from spec → implementation → production → iteration.
Good-to-Have Skills
- Experience working in early-stage startups or building 0→1 products.
- Familiarity with Kubernetes or other container orchestration tools.
- Experience with Infrastructure as Code (Terraform, Pulumi).
- Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
- Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.
What We Offer
- Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
- Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
- A mission-driven, fast-growing product focused on sustainability and clean energy.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks (a short example follows this list)
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
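For illustration, a minimal sketch of the kind of Python automation script the role mentions: listing EC2 instances that are missing a required tag. It assumes boto3 and configured AWS credentials; the tag policy is hypothetical.

```python
# Ops hygiene sketch: find EC2 instances missing a required tag.
import boto3

REQUIRED_TAG = "team"   # hypothetical tagging policy

def untagged_instances() -> list[str]:
    ec2 = boto3.client("ec2")
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```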
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.
Key Responsibilities
- Design, develop, and maintain scalable Education ERP modules.
- Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.
- Build and optimize REST APIs/GraphQL services and ensure seamless integrations.
- Optimize system performance, scalability, and security for high-volume ERP usage.
- Conduct code reviews, enforce coding standards, and mentor junior developers.
- Stay updated with emerging technologies and recommend improvements for ERP solutions.
Required Skills & Qualifications
- Strong expertise in Node.js and PHP (Laravel, Core PHP).
- Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).
- Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).
- Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
- Hands-on with Git/GitHub, Docker, and CI/CD pipelines.
- Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
- 4+ years of professional development experience, with a minimum of 2 years in ERP systems.
Preferred Experience
- Prior work in the education ERP domain.
- Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.
- Exposure to high-traffic enterprise applications.
- Strong leadership, mentoring, and problem-solving abilities.
Benefit:
Permanent Work From Home
- Proficient in Ruby on Rails framework and its ecosystem.
- Strong understanding of database management (PostgreSQL) and caching mechanisms (Redis).
- Familiarity with Docker, GitHub Actions, and deployment on platforms like AWS (EKS).
- Experience with background job processing using Sidekiq.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of monitoring tools (e.g., Datadog) for application performance tracking.
- Familiarity with Agile methodologies and version control systems (e.g., Git).
About PGAGI:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.
Key Responsibilities:
AI Model Deployment & Integration
- Assist in containerizing and deploying AI/ML models into production using Docker.
- Support integration of models into existing systems and APIs.
Infrastructure Management
- Help manage cloud and on-premise environments to ensure scalability and consistency.
- Work with Kubernetes for orchestration and environment scaling.
CI/CD Pipeline Automation
- Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
- Implement basic automated testing and rollback mechanisms.
Hosting & Web Environment Management
- Assist in managing hosting platforms, web servers, and CDN configurations.
- Support DNS, load balancer setups, and ensure high availability of web services.
Monitoring, Logging & Optimization
- Set up and maintain monitoring/logging tools like Prometheus and Grafana (see the exporter sketch after this list).
- Participate in troubleshooting and resolving performance bottlenecks.
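As a small, hedged example of the monitoring setup mentioned above: a Python process that exposes a custom gauge for Prometheus to scrape (and Grafana to chart). It assumes the prometheus_client package; the metric name and queue lookup are illustrative only.

```python
# Minimal Prometheus exporter sketch.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("demo_queue_depth", "Pending jobs in a hypothetical queue")

def read_queue_depth() -> int:
    return random.randint(0, 100)    # stand-in for a real queue lookup

if __name__ == "__main__":
    start_http_server(9100)          # metrics served at :9100/metrics
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)               # matches a typical scrape interval
```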
Security & Compliance
- Apply basic DevSecOps practices including security scans and access control implementations.
- Follow security and compliance checklists under supervision.
Cost & Resource Management
- Monitor resource usage and suggest cost optimization strategies in cloud environments.
Documentation
- Maintain accurate documentation for deployment processes and incident responses.
Continuous Learning & Innovation
- Suggest improvements to workflows and tools.
- Stay updated with the latest DevOps and AI infrastructure trends.
Requirements:
- Around 6 months of experience in a DevOps or related technical role (internship or professional).
- Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
- Exposure to scripting languages (e.g., Bash, Python) is a plus.
- Strong problem-solving skills and eagerness to learn.
- Good communication and documentation abilities.
Compensation
- Joining Bonus: INR 2,500 one-time bonus upon joining.
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now
#Devops #Docker #Kubernetes #DevOpsIntern
We are looking for a DevOps Engineer with hands-on experience in automating, monitoring, and scaling cloud-native infrastructure.
You will play a critical role in building and maintaining high-availability, secure, and scalable CI/CD pipelines for our AI- and blockchain-powered FinTech platforms.
You will work closely with Engineering, QA, and Product teams to streamline deployments, optimize cloud environments, and ensure reliable production systems.
Key Responsibilities
- Design, build, and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor applications on AWS, Azure, or GCP
- Ensure high availability, scalability, and performance of production environments
- Implement security best practices across infrastructure and DevOps workflows
- Automate environment provisioning, deployments, backups, and monitoring (a smoke-test sketch follows this list)
- Configure and manage containerized applications using Docker and Kubernetes
- Collaborate with developers to improve build, release, and deployment processes
- Monitor systems using tools like Prometheus, Grafana, ELK Stack, or CloudWatch
- Perform root cause analysis (RCA) and support production incident response
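For illustration, a hedged sketch of deployment automation: a post-deploy smoke test that a CI/CD stage can run, retrying a health endpoint and exiting nonzero to trigger a rollback. The URL and retry limits are hypothetical.

```python
# Post-deploy smoke test used as a CI/CD gate (stdlib only).
import sys
import time
import urllib.request

HEALTH_URL = "https://staging.example.com/healthz"  # hypothetical endpoint
ATTEMPTS = 10
DELAY_SECONDS = 6

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    for attempt in range(1, ATTEMPTS + 1):
        if healthy():
            print(f"Healthy after {attempt} attempt(s)")
            sys.exit(0)
        time.sleep(DELAY_SECONDS)
    print("Service failed health checks; pipeline should roll back", file=sys.stderr)
    sys.exit(1)   # nonzero exit fails the stage
```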
Required Skills & Experience
- 2+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with AWS, Azure, or GCP
- Proven experience in setting up and managing CI/CD pipelines
- Proficiency in Docker, Kubernetes, and container orchestration
- Experience with Terraform, Ansible, or similar IaC tools
- Knowledge of monitoring, logging, and alerting systems
- Strong scripting skills using Shell, Bash, or Python
- Good understanding of Git, version control, and branching strategies
- Experience supporting production-grade SaaS or enterprise platforms
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
Mandatory Technical Skill Set
- Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience in (a small example follows this list):
- AWS services: EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
- SQL
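As a small, hedged example of the AWS work listed above: generating a short-lived presigned S3 download URL with boto3. The bucket and key names are hypothetical.

```python
# Presigned S3 URL sketch (assumes boto3 and AWS credentials).
import boto3

def presigned_download_url(bucket: str, key: str, expires: int = 3600) -> str:
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,   # URL validity in seconds
    )

if __name__ == "__main__":
    print(presigned_download_url("my-app-reports", "2024/usage.csv"))
```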
Hiring DevOps Engineers (Freelance)
We’re hiring for our client: Biz-Tech Analytics
Role: DevOps Engineer (Freelance)
Experience: 4-7+ years
Project: Terminus Project
Location: Remote
Engagement Type: Freelance | Project-based
About the Role:
Biz-Tech Analytics is looking for experienced DevOps Engineers to contribute to the Terminus Project, a hands-on initiative involving system-level problem solving, automation, and containerised environments.
This role is ideal for engineers who enjoy working close to the system layer, debugging complex issues, and building reliable automation in isolated environments.
Key Responsibilities:
• Work on Linux-based systems, handling process management, file systems, and system utilities
• Write clean, testable Python code for automation and verification (see the sketch after this list)
• Build, configure, and manage Docker-based environments for testing and deployment
• Troubleshoot and debug complex system and software issues
• Collaborate using Git and GitHub workflows, including pull requests and branching
• Execute tasks independently and iterate based on structured feedback
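For illustration, a hedged sketch of clean, testable Python for the system-level work this role describes: finding Linux process IDs by command name via /proc. The proc root is injectable so tests can point it at a fake directory tree. Stdlib only; Linux-specific.

```python
# Process lookup via /proc (Linux), written for testability.
from pathlib import Path

def pids_by_name(name: str, proc_root: Path = Path("/proc")) -> list[int]:
    """Return PIDs whose /proc/<pid>/comm matches `name` exactly."""
    matches = []
    for entry in proc_root.iterdir():
        if not entry.name.isdigit():
            continue                     # skip non-process entries
        try:
            comm = (entry / "comm").read_text().strip()
        except OSError:
            continue                     # process exited mid-scan
        if comm == name:
            matches.append(int(entry.name))
    return matches

if __name__ == "__main__":
    print(pids_by_name("sshd"))
```

Passing `proc_root` as a parameter keeps the function unit-testable: a test can build a temporary directory tree mimicking /proc instead of touching the live system.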
Required Skills & Qualifications:
• Expert-level proficiency with Linux CLI, including Bash scripting
• Strong Python programming skills for automation and tooling
• Hands-on experience with Docker and containerized environments
• Excellent problem-solving and debugging skills
• Proficiency with Git and standard GitHub workflows
Preferred Qualifications:
• Professional experience in DevOps or Site Reliability Engineering (SRE)
• Exposure to cloud platforms such as AWS, GCP, or Azure
• Familiarity with machine learning frameworks like TensorFlow or PyTorch
• Prior experience contributing to open-source projects
Engagement Details
• Fully remote freelance engagement
• Flexible workload, with scope to take on additional tasks
• Opportunity to work on real-world systems supporting advanced AI and infrastructure projects
Apply via Google form: https://forms.gle/SDgdn7meiicTNhvB8
About Biz-Tech Analytics:
Biz-Tech Analytics partners with global enterprises, AI labs, and industrial businesses to help them build and scale frontier AI systems. From data creation to deployment, the team delivers specialised services including human-in-the-loop annotation, reinforcement learning from human feedback (RLHF), and custom dataset creation.
With a network of 500+ vetted developers, STEM professionals, linguists, and domain experts, Biz-Tech Analytics supports leading global platforms by enhancing complex AI models and providing high-precision feedback at scale.
Their work sits at the intersection of advanced research, engineering rigor, and real-world AI deployment, making them a strong partner for cutting-edge AI initiatives.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- Company background: product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV attachment is mandatory
- Please provide your CTC breakup (fixed + variable)
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
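As a hedged sketch of the orchestration work described above, a minimal Airflow DAG wiring train, validate, and deploy steps in sequence. It assumes Airflow 2.4+ (for the `schedule` argument); the task bodies are placeholders, not a real pipeline.

```python
# Minimal Airflow DAG sketch: train -> validate -> deploy.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model():
    print("training... (placeholder)")

def validate_model():
    print("validating... (placeholder)")

def deploy_model():
    print("deploying... (placeholder)")

with DAG(
    dag_id="ml_train_validate_deploy",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # retrain daily; adjust to the real cadence
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)
    train >> validate >> deploy   # linear dependency chain
```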
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have delivered over 170 automation projects for 65+ global clients, including Fortune 500 enterprises that trust us with mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving complex engineering challenges with precision and reliability.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.
Who We Seek
We are hiring DevOps Engineers (6 months - 1 year experience) to join our DevOps team. You will work on infrastructure automation, CI/CD pipelines, cloud deployments, container orchestration, and system reliability.
This role is ideal for someone who wants to work with modern DevOps tooling and contribute to high-impact engineering decisions.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
CI/CD Pipeline Management
- Design, implement, and maintain efficient CI/CD pipelines using Jenkins, GitLab CI, Azure DevOps, or similar tools.
- Automate build, test, and deployment processes to increase delivery speed and reliability.
Infrastructure as Code (IaC)
- Provision and manage infrastructure on AWS, Azure, or GCP using Terraform, CloudFormation, or Ansible.
- Maintain scalable, secure, and cost-optimized environments.
Containerization & Orchestration
- Build and manage Docker-based environments.
- Deploy and scale workloads using Kubernetes.
Monitoring & Alerting
- Implement monitoring, logging, and alerting systems using Prometheus, Grafana, ELK Stack, Datadog, or similar.
- Develop dashboards and alerts to detect issues proactively.
System Reliability & Performance
- Implement systems for high availability, disaster recovery, and fault tolerance.
- Troubleshoot and optimize infrastructure performance.
Scripting & Automation
- Write automation scripts in Python, Bash, or Shell to streamline operations.
- Automate repetitive workflows to reduce manual intervention.
Collaboration & Best Practices
- Work closely with Development, QA, and Security teams to embed DevOps best practices into the SDLC.
- Follow security standards for deployments and infrastructure.
- Work efficiently with Unix/Linux systems and understand core networking concepts (DNS, DHCP, NAT, VPN, TCP/IP).
What We’re Looking For
- Strong understanding of Linux distributions (Ubuntu, CentOS, RHEL) and Windows environments.
- Proficiency with Git and experience using GitHub, GitLab, or Bitbucket.
- Ability to write automation scripts using Bash/Shell or Python.
- Basic knowledge of relational databases like MySQL or PostgreSQL.
- Familiarity with web servers such as NGINX or Apache2.
- Experience working with AWS, Azure, GCP, or DigitalOcean.
- Foundational understanding of Ansible for configuration management.
- Basic knowledge of Terraform or CloudFormation for IaC.
- Hands-on experience with Jenkins or GitLab CI/CD pipelines.
- Strong knowledge of Docker for containerization.
- Basic exposure to Kubernetes for orchestration.
- Familiarity with at least one programming language (Java, Node.js, or Python).
Benefits
🤝 Work directly with founders and engineering leaders.
💪 Drive projects that create real business impact, not busywork.
💡 Gain practical, industry-relevant skills you won’t learn in college.
🚀 Accelerate your growth by working on meaningful engineering challenges.
📈 Learn continuously with mentorship and structured development opportunities.
🤗 Be part of a collaborative, high-energy workplace that values innovation.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that trust us with their mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving high-impact engineering problems.
What We Value
Ownership: You take accountability for outcomes, not just tasks.
High Velocity: We iterate fast, learn constantly, and deliver with precision.
Who We Seek
We are looking for a DevOps Intern to join our DevOps team and gain hands-on experience working with real infrastructure, automation pipelines, and deployment environments. You will support CI/CD processes, cloud environments, monitoring, and system reliability while learning industry-standard tools and practices.
We’re seeking someone who is technically curious, eager to learn, and driven to build reliable systems in a fast-paced engineering environment.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Assist in deploying product updates, monitoring system performance, and identifying production issues.
- Contribute to building and improving CI/CD pipelines for automated deployments.
- Support the provisioning, configuration, and maintenance of cloud infrastructure.
- Work with tools like Docker, Jenkins, Git, and monitoring systems to streamline workflows.
- Help automate recurring operational processes using scripting and DevOps tools.
- Participate in backend integrations aligned with product or customer requirements.
- Collaborate with developers, QA, and operations to improve reliability and scalability.
- Gain exposure to containerization, infrastructure-as-code, and cloud platforms.
- Document processes, configurations, and system behaviours to support team efficiency.
- Learn and apply DevOps best practices in real-world environments.
What We’re Looking For
- Hands-on experience or coursework with Docker, Linux, or cloud fundamentals.
- Familiarity with Jenkins, Git, or basic CI/CD concepts.
- Basic understanding of AWS, Azure, or Google Cloud environments.
- Exposure to configuration management tools like Ansible, Puppet, or similar.
- Interest in Kubernetes, Terraform, or infrastructure-as-code practices.
- Ability to write or modify simple shell or Python scripts.
- Strong analytical and troubleshooting mindset.
- Good communication skills with the ability to articulate technical concepts clearly.
- Eagerness to learn, take initiative, and adapt in a fast-moving engineering environment.
- Attention to detail and a commitment to accuracy and reliability.
Benefits
🤝 Work directly with founders and senior engineers.
💪 Contribute to live projects that impact real customers and systems.
💡 Learn tools and practices that engineering programs rarely teach.
🚀 Accelerate your growth through real-world problem solving.
📈 Build a strong DevOps foundation with continuous learning opportunities.
🤗 Thrive in a collaborative environment that encourages experimentation and growth.
The Senior Developer will work on core product features, uphold architectural standards, and collaborate closely with the Team Leader to deliver high-quality, scalable solutions.
Responsibilities:
- Develop and maintain major product features
- Follow technical guidance from the Team Leader
- Ensure adherence to coding standards and system architecture
- Perform thorough testing of assigned tasks
Required Skills:
- Strong expertise in Next.js, React.js, Express.js, and PostgreSQL
- Solid understanding of system design, clean code practices, and performance optimization
🚀 Hiring: Java Developer at Deqode
⭐ Experience: 2+ Years
📍 Location: Mumbai
⭐ Work Mode: 5 Days Work from Office
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We are looking for a Java Developer (Mid/Senior) to join our Implementation & Application Support team supporting critical fintech platforms. The role involves backend development, application monitoring, incident management, and close collaboration with customers. Senior developers will handle escalations, mentor juniors, and drive operational excellence.
Key Responsibilities (Brief)
✅ Develop and support Java applications (Spring Boot / Quarkus).
✅ Monitor applications and resolve production issues.
✅ Manage incidents, perform root cause analysis, and handle ITSM tickets.
✅ Collaborate with customers and internal teams.
✅ (Senior) Lead escalations and mentor junior engineers.
Top Skills Required
✅ Java, Spring Boot, Quarkus
✅ Application Support & Incident Management
✅ ServiceNow / JIRA / ITSM tools
✅ Monitoring & Production Support
✅ Kafka, Redis, Solace, Aerospike (Good to have)
✅ Docker, Kubernetes, CI/CD (Plus)
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Background in B2B SaaS product companies
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines (see the gate sketch after this list).
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
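For illustration, a hedged sketch of one CI/CD security integration pattern: a gate script that parses a scanner's JSON findings and fails the build on high-severity issues. The report schema here is hypothetical; real tools such as Snyk emit their own formats.

```python
# CI security gate sketch: fail the pipeline on blocking findings.
import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def blocking_findings(report_path: str) -> list[dict]:
    with open(report_path) as fh:
        report = json.load(fh)
    # Hypothetical schema: {"findings": [{"severity": ..., "title": ...}, ...]}
    return [
        f for f in report.get("findings", [])
        if f.get("severity", "").lower() in BLOCKING_SEVERITIES
    ]

if __name__ == "__main__":
    findings = blocking_findings(sys.argv[1])
    for f in findings:
        print(f"BLOCKED: [{f['severity']}] {f.get('title', 'unknown issue')}")
    sys.exit(1 if findings else 0)   # nonzero exit fails the pipeline stage
```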
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of content, interact and share your ideas, and learn plenty while at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models.
- Develop features to improve model accuracy and outcomes.
- Deploy models into production using Docker, Kubernetes, and cloud services.
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets.
- Hands-on experience with cloud AI/ML services.
- Exposure to RAG architecture.
Job Title: AI/ML Engineer (LLMs, RAG & Agent Systems)
Location: Bangalore, India (On-site)
Type: Full-time
About the Role
As an AI/ML Engineer, you’ll be part of a small, fast-moving team focused on developing LLM-powered agentic systems that drive our next generation of AI products.
You’ll work on designing, implementing, and optimizing pipelines involving retrieval-augmented generation (RAG), multi-agent coordination, and tool-using AI systems.
Responsibilities
- Design and implement components for LLM-based systems (retrievers, planners, memory, evaluators).
- Build and maintain RAG pipelines using vector databases and embedding models (a minimal retrieval sketch follows this list).
- Experiment with reasoning frameworks like ReAct, Tree of Thought, and Reflexion.
- Collaborate with backend and infra teams to deploy and optimize agentic applications.
- Research and experiment with open-source LLM frameworks to identify best-fit architectures.
- Contribute to internal tools for evaluation, benchmarking, and scaling AI agents.
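As a hedged sketch of the RAG retrieval step mentioned above: embed a toy corpus, rank documents by cosine similarity, and assemble a grounded prompt. It assumes sentence-transformers and numpy; call_llm() is a hypothetical generation hook, and a real system would swap the in-memory array for a vector database.

```python
# Minimal RAG retrieval sketch (toy corpus, hypothetical LLM hook).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [
    "Retrieval-augmented generation grounds LLM answers in retrieved text.",
    "Vector databases store embeddings for similarity search.",
    "ReAct interleaves reasoning steps with tool calls.",
]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec                    # cosine similarity (unit vectors)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical hook for an actual LLM API")

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}")
```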
Required Skills
- Strong foundation in ML/DL theory and implementation (PyTorch preferred).
- Understanding of transformer architectures, embeddings, and LLM mechanics.
- Practical exposure to prompt engineering, tool calling, and structured output design.
- Experience in Python, Git/GitHub, and data processing pipelines.
- Familiarity with RAG systems, vector databases, and API-based model inference.
- Ability to write clean, modular, and reproducible code.
Preferred Skills
- Experience with LangChain, LangGraph, Autogen, or CrewAI.
- Hands-on with Hugging Face ecosystem (transformers, datasets, etc.).
- Working knowledge of Redis, PostgreSQL, or MongoDB.
- Experience with Docker and deployment workflows.
- Familiarity with OpenAI, Anthropic, vLLM, or Ollama inference APIs.
- Exposure to MLOps concepts like CI/CD, model versioning, or cloud (AWS/GCP/Azure).
What We Value
- Deep understanding of core principles over surface-level familiarity with tools.
- Ability to think like a researcher and execute like an engineer.
- Collaborative mindset, building together, learning together.
About the role
We’re looking for a hands-on Junior System/Cloud Engineer to help keep our cloud infrastructure and internal IT humming. You’ll work closely with a senior IT/Cloud Engineer across Azure, AWS, and Google Cloud—provisioning VMs/EC2/Compute Engine instances, basic database setup, and day-to-day maintenance. You’ll also install and maintain team tools (e.g., Elasticsearch, Tableau, MSSQL) and pitch in with end-user support for laptops and any other system issues when needed.
What you’ll do
Cloud provisioning & operations (multi-cloud)
· Create, configure, and maintain virtual machines and related resources in Azure, AWS (EC2), and Google Compute Engine (networks, security groups/firewalls, storage, OS patching, and routine maintenance/backups); see the provisioning sketch after this list.
· Assist with database setup (managed services or VM-hosted), backups, and access controls under guidance from senior engineers.
· Implement tagging, least-privilege IAM, and routine patching for compliance and cost hygiene.
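For illustration, a hedged sketch of the AWS side of this provisioning work: launching a tagged EC2 instance with boto3. The AMI ID, region, and tag values are hypothetical placeholders.

```python
# Scripted EC2 provisioning sketch (assumes boto3 and AWS credentials).
import boto3

def launch_instance() -> str:
    ec2 = boto3.client("ec2", region_name="ap-south-1")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "owner", "Value": "it-ops"}],  # tagging for cost hygiene
        }],
    )
    return resp["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print("Launched:", launch_instance())
```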
Tooling installation & maintenance
· Install, configure, upgrade, and monitor required tools such as Elasticsearch and Tableau Server; manage the service lifecycle (systemd/Windows services), security basics, and health checks.
· Document installation steps, configurations, and runbooks.
Monitoring & incident support
· Set up basic monitoring/alerts using cloud-native tools and logs.
End-user & endpoint support (as needed)
· Provide first-line support for laptops/desktops (Windows/macOS), peripherals, conferencing, VPN, and common apps; escalate when appropriate.
· Assist with device setup, imaging, patching, and inventory; keep tickets and resolutions well documented.
What you’ll bring
· Experience: 1–3 years in DevOps/System Admin/IT Ops with exposure to at least one major cloud (primarily Azure; AWS or GCP skills are good to have); eagerness to work across all three.
Core skills:
· Linux and Windows server admin basics (users, services, networking, storage).
· VM provisioning and troubleshooting in Azure, AWS EC2, or GCE; understanding of security groups/firewalls, SSH/RDP, and snapshots/images.
· Installation/maintenance of team tools (e.g., Elasticsearch, Tableau Server).
· Scripting (Bash and/or PowerShell); Git fundamentals; comfort with ticketing systems.
· Clear documentation and communication habits.
Nice to have:
· Terraform or ARM/CloudFormation basics; container fundamentals (Docker).
· Monitoring/logging familiarity (Elastic/Azure Monitor).
· Basic networking (DNS, HTTP, TLS, VPN, Nginx)
· Azure certification (AZ-104, AZ-204)
To be considered for the next stage, please fill out the Google form with your updated resume.
About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Job Summary:
We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS Code build/deploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
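For flavour, a minimal health-check sketch against hypothetical internal endpoints; a real setup would route failures into an alerting channel (pager, chat) rather than a log.

```python
import logging
import requests

# Hypothetical service endpoints; substitute your own.
SERVICES = {"api": "https://api.example.internal/healthz"}
logging.basicConfig(level=logging.INFO)

def check(name: str, url: str) -> bool:
    try:
        r = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        logging.error("%s unreachable: %s", name, exc)
        return False
    if r.status_code != 200:
        logging.error("%s unhealthy: HTTP %s", name, r.status_code)
        return False
    logging.info("%s healthy", name)
    return True

if __name__ == "__main__":
    for name, url in SERVICES.items():
        check(name, url)  # wire failures into your alerting channel of choice
```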
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
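To make the idempotency point concrete, here is a small sketch (table name, key, and connection string are assumptions): because the load is an upsert keyed on `(symbol, ts)`, replaying the same batch during a retry or backfill leaves the table in the same state instead of creating duplicates.

```python
import psycopg2  # assumes a reachable PostgreSQL instance

# Requires a UNIQUE constraint on (symbol, ts) for ON CONFLICT to apply.
UPSERT = """
INSERT INTO prices (symbol, ts, close)
VALUES (%s, %s, %s)
ON CONFLICT (symbol, ts) DO UPDATE SET close = EXCLUDED.close;
"""

def load(rows: list[tuple]) -> None:
    # Idempotent by construction: re-running a batch yields identical table state.
    with psycopg2.connect("dbname=market") as conn:   # DSN is a placeholder
        with conn.cursor() as cur:
            cur.executemany(UPSERT, rows)

load([("BTC-USD", "2024-01-01T00:00:00Z", 42000.0)])
```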
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
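A small pandas sketch of such checks, with column names assumed; a production system would emit these findings to monitoring rather than return a list.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Basic completeness, range, and freshness checks (column names assumed)."""
    problems = []
    if df["price"].isna().any():
        problems.append("missing prices")
    if (df["price"] <= 0).any():
        problems.append("non-positive prices")
    # Freshness: assumes 'ts' is a tz-aware datetime column.
    age = pd.Timestamp.now(tz="UTC") - df["ts"].max()
    if age > pd.Timedelta(hours=1):
        problems.append(f"stale feed: newest row is {age} old")
    return problems
```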
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
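One concrete illustration of preventing look-ahead bias, on toy data: features must be lagged so that each row only sees information that was available at prediction time.

```python
import pandas as pd

df = pd.DataFrame({"close": [100.0, 101.0, 99.5, 102.0]},
                  index=pd.date_range("2024-01-01", periods=4, freq="D"))

# Wrong: using today's return as a feature for today leaks today's close --
# information not yet available at prediction time. Right: shift so features
# only see the past, and targets only look forward.
df["ret"] = df["close"].pct_change()
df["feature_lagged_ret"] = df["ret"].shift(1)   # known at prediction time
df["target_next_ret"] = df["ret"].shift(-1)     # what we try to predict
print(df)
```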
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
About the Role
We are seeking a highly skilled and experienced AI Ops Engineer to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and efficiency of our AI/ML systems in production. You will work at the intersection of software engineering, machine learning, and DevOps, helping to design, deploy, and manage AI/ML models and pipelines that power mission-critical business applications.
The ideal candidate has hands-on experience in AI/ML operations and orchestrating complex data pipelines, a strong understanding of cloud-native technologies, and a passion for building robust, automated, and scalable systems.
Key Responsibilities
- AI/ML Systems Operations: Develop and manage systems to run and monitor production AI/ML workloads, ensuring performance, availability, cost-efficiency and convenience.
- Deployment & Automation: Build and maintain ETL, ML and Agentic pipelines, ensuring reproducibility and smooth deployments across environments.
- Monitoring & Incident Response: Design observability frameworks for ML systems (alerts and notifications, latency, cost, etc.) and lead incident triage, root cause analysis, and remediation.
- Collaboration: Partner with data scientists, ML engineers, and software engineers to operationalize models at scale.
- Optimization: Continuously improve infrastructure, workflows, and automation to reduce latency, increase throughput, and minimize costs.
- Governance & Compliance: Implement MLOps best practices, including versioning, auditing, security, and compliance for data and models.
- Leadership: Mentor junior engineers and contribute to the development of AI Ops standards and playbooks.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- 4+ years of experience in AI/MLOps, DevOps, SRE, or Data Engineering, with at least 2 years in AI/ML-focused operations.
- Strong expertise with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
- Hands-on experience with ML pipelines and frameworks (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI, etc.).
- Proficiency in Python and/or other scripting languages for automation.
- Familiarity with monitoring/observability tools (Prometheus, Grafana, Datadog, ELK, etc.).
- Deep understanding of CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm, etc.).
- Knowledge of data governance, model drift detection, and compliance in AI systems (a minimal drift check is sketched after this list).
- Excellent problem-solving, communication, and collaboration skills.
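As a hedged sketch of the drift-detection piece, on synthetic data: a two-sample Kolmogorov-Smirnov test comparing a feature's training-time distribution against live traffic. The threshold is an operational choice, not a standard.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5_000)   # feature values at training time
live_scores = rng.normal(0.3, 1.0, 5_000)    # same feature in production (shifted)

stat, p_value = ks_2samp(train_scores, live_scores)
if p_value < 0.01:   # alert threshold is an operational choice
    print(f"drift detected (KS={stat:.3f}); trigger retraining/alerting")
```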
Nice-to-Have
- Experience in large-scale distributed systems and real-time data streaming (Kafka, Flink, Spark).
- Familiarity with data science concepts, and frameworks such as scikit-learn, Keras, PyTorch, Tensorflow, etc.
- Full Stack Development knowledge to collaborate effectively across end-to-end solution delivery
- Contributions to open-source MLOps/AI Ops tools or platforms.
- Exposure to Responsible AI practices, model fairness, and explainability frameworks
Why Join Us
- Opportunity to shape and scale AI/ML operations in a fast-growing, innovation-driven environment.
- Work alongside leading data scientists and engineers on cutting-edge AI solutions.
- Competitive compensation, benefits, and career growth opportunities.
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
What you'll be doing:
As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will work with our dedicated engineering team to design, build, and deploy our core platforms and APIs, ensuring our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision. You will also build and maintain back-end interfaces using modern frameworks.
- Design & Implement: Lead the design, implementation and management of Trential’s products.
- Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
- Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
- Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
- Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.
What we're looking for:
- 3+ years of experience in backend development.
- Deep proficiency in JavaScript and Node.js, with experience building and operating distributed, fault-tolerant systems.
- Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
- Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.
Preferred Qualifications (Nice to Have)
- Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
- Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and experience designing systems that comply with them.
- Experience integrating AI/ML models into verification or data extraction workflows.
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices.
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command over Python frameworks such as Django, Flask, or FastAPI (a minimal Flask sketch follows this list).
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest.
• Knowledge of containerization (Docker) is a plus.
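For flavour, a minimal Flask REST sketch; the in-memory store and routes are illustrative only, and Flask 2+ is assumed for the `app.get`/`app.post` shortcuts.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS: dict[int, dict] = {}   # in-memory stand-in for a real database

@app.post("/items")
def create_item():
    item = request.get_json(force=True)
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = item
    return jsonify(id=item_id, **item), 201

@app.get("/items/<int:item_id>")
def get_item(item_id: int):
    if item_id not in ITEMS:
        return jsonify(error="not found"), 404
    return jsonify(ITEMS[item_id])

if __name__ == "__main__":
    app.run(debug=True)
```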
Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python
Criteria:
Looking for candidates with a notice period of 15 to 30 days.
Candidates from Hyderabad only.
Candidates from EPAM only.
1. 4+ years of software development experience
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development
8. SQL database
Description
Position Overview
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server (a pub/sub sketch follows this list)
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
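A minimal pub/sub sketch with the `nats-py` client, assuming a NATS server on localhost; the subject name and payload are illustrative.

```python
import asyncio
import nats  # nats-py client

async def main():
    nc = await nats.connect("nats://localhost:4222")  # assumed local server

    async def handler(msg):
        print(f"received on {msg.subject}: {msg.data.decode()}")

    # Subscribe, then publish to the same subject (illustrative names).
    await nc.subscribe("orders.created", cb=handler)
    await nc.publish("orders.created", b'{"order_id": 42}')
    await nc.flush()
    await asyncio.sleep(0.1)   # give the handler a moment to run
    await nc.close()

asyncio.run(main())
```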
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
Technical Skills
- Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
Soft Skills
- Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
We're Hiring: Golang Developer
Location: Bangalore
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of two use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC; see the metric sketch after this list)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
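To ground the class-imbalance point, a toy sketch with synthetic scores: at a roughly 0.2% positive rate, ROC-AUC can look flattering while PR-AUC (average precision) exposes how hard the ranking problem really is.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
n = 100_000
y_true = (rng.random(n) < 0.002).astype(int)            # ~0.2% fraud rate (toy)
y_score = np.clip(y_true * 0.3 + rng.random(n), 0, 1)   # toy scores: weak signal

print("ROC-AUC:", roc_auc_score(y_true, y_score))            # deceptively high
print("PR-AUC :", average_precision_score(y_true, y_score))  # harsher, more honest
```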

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
Notice period: 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary: INR 12,00,000 - 14,00,000
Job Summary
Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python
- Implement IaC (Terraform/CloudFormation)
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
Job Title: Full Stack Developer
Location: Bangalore, India
About Us:
Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.
Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.
Role Overview:
As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.
Your Core Impact
- You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
- You’ll translate learning and AI requirements into tangible, performant product features.
- Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.
Key Responsibilities:
Platform Architecture & Backend Development
- Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
- Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
- Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
- Ensure security, uptime, and performance across all services.
Front-End Development & User Experience
- Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
- Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
- Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.
AI Integration & Support
- Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
- Build APIs that pass context, queries, and results between learners, models, and the backend in real time.
- Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.
Data, Analytics & Reporting
- Build dashboards and data views for educators and product teams to derive insights from learner behavior.
- Implement secure data storage and export pipelines for progress analytics.
Collaboration & Engineering Culture
- Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
- Participate in code reviews, architectural discussions, and system design decisions.
- Help define engineering best practices that balance innovation, maintainability, and performance.
Required Qualifications & Skills
- 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
- Strong proficiency in Python or Node.js for backend services.
- Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
- Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
- Experience with real-time data systems (WebSockets or event-driven architectures).
- Exposure to AI/ML integrations or data-intensive backends.
- Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
- Strong problem-solving mindset and attention to detail.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing (a small indexing sketch follows).
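A small pymongo sketch of the indexing side (connection URI, database, and collection names are assumptions): a compound index aligned with the hot read path keeps high-read/low-latency queries off collection scans, and a tight projection lets the index cover the query.

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # URI is an assumption
events = client["app"]["events"]                    # hypothetical db/collection

# Compound index matching the hot read path: latest events per user.
events.create_index([("user_id", ASCENDING), ("ts", DESCENDING)])

# Filter and projection are both served by the index (covered query).
latest = (events.find({"user_id": 123}, {"_id": 0, "user_id": 1, "ts": 1})
                .sort("ts", -1)
                .limit(20))
for doc in latest:
    print(doc)
```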
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
• Build and operate infrastructure powering millions of monthly users.
• Opportunity to shape DevOps culture and cloud architecture from the ground up.
• High-impact role in a fast-scaling Indian OTT product.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development and product creation along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Bengaluru
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, share your resume.
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance (a minimal policy sketch follows this list).
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
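To make the IAM least-privilege item above concrete, here is a minimal sketch using boto3. The policy name and bucket ARN are hypothetical placeholders, not details from this posting.

import json

import boto3

# A least-privilege policy: one read-only action, scoped to one bucket
# rather than "*". Bucket ARN and policy name are placeholders.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-ml-artifacts/*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReadOnlyMlArtifacts",
    PolicyDocument=json.dumps(policy_doc),
)

Scoping every role this narrowly, then layering SCPs on top at the organization level, is the usual way the least-privilege requirement above gets enforced in practice.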
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, and Ansible with policy-as-code (OPA, Checkov, tfsec); a sample custom-check sketch follows below.
- Enforce misconfiguration scanning and automated remediation.
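As one illustration of the policy-as-code item above, Checkov supports custom checks written in Python. This is a minimal sketch assuming Checkov's documented custom-check interface; the check ID, name, and rule are placeholders.

from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck


class S3BucketEncryption(BaseResourceCheck):
    def __init__(self):
        # ID and name are hypothetical; real custom checks need unique IDs
        super().__init__(
            name="Ensure S3 bucket declares server-side encryption",
            id="CKV_CUSTOM_001",
            categories=[CheckCategories.ENCRYPTION],
            supported_resources=["aws_s3_bucket"],
        )

    def scan_resource_conf(self, conf):
        # conf is the parsed Terraform resource block for one aws_s3_bucket
        if conf.get("server_side_encryption_configuration"):
            return CheckResult.PASSED
        return CheckResult.FAILED


check = S3BucketEncryption()

Dropping this file into a directory passed via Checkov's --external-checks-dir flag should fail any plan containing an unencrypted bucket, giving CI the misconfiguration-scanning gate described above.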
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, and audit logging; a minimal drift check follows below.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
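To illustrate the drift-detection item above: one common baseline (an illustrative choice, not necessarily this team's method) is a two-sample Kolmogorov-Smirnov test comparing a training-time reference sample against live traffic, feature by feature.

import numpy as np
from scipy.stats import ks_2samp


def feature_drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the live distribution differs significantly from the reference."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha


# Simulated example: a mean shift in the live data triggers the flag
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=1_000)
print(feature_drifted(reference, live))  # True

In production, the boolean (or the p-value itself) would typically be exported as a CloudWatch or Prometheus metric so Grafana can alert on it, closing the loop with the integration item above.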
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows (a findings-polling sketch follows this list).
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, runbooks, and audits.
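As a sketch of the automated threat-detection item above, the snippet below polls GuardDuty findings with boto3. The severity filter and criterion shape are illustrative assumptions; verify them against the GuardDuty API reference before relying on them.

import boto3

guardduty = boto3.client("guardduty")

# Enumerate detectors, then pull high-severity findings from each
for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(
        DetectorId=detector_id,
        FindingIds=finding_ids,
    )["Findings"]
    for finding in findings:
        # In a real workflow this would route to a SIEM or pager, not stdout
        print(finding["Type"], finding["Severity"], finding["Title"])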
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack: building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django); a minimal service sketch follows this list
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
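For the Python backend item flagged above, here is a minimal FastAPI sketch of the shape such a service might take. The routes and model fields are hypothetical, not taken from the posting.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float


@app.get("/health")
def health() -> dict:
    # Liveness endpoint for the monitoring responsibility above
    return {"status": "ok"}


@app.post("/items")
def create_item(item: Item) -> dict:
    # Request validation happens automatically via the pydantic model
    return {"created": item.name, "price": item.price}

Run locally with: uvicorn main:app --reload (assuming the module is saved as main.py).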
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI/Django), with working familiarity across both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.
Job Title: DevOps Engineer
Location: Mumbai
Experience: 2–4 Years
Department: Technology
About InCred
InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.
Role Overview
As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.
Key Responsibilities
- Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
- CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
- Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
- Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
- Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
- Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
- Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
- Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.
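To ground the monitoring item above, here is a minimal instrumentation sketch using the official Prometheus Python client. The metric names and simulated handler are placeholders.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests processed")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()

Prometheus scrapes the /metrics endpoint and Grafana charts the resulting series, which is the proactive-monitoring loop the responsibility describes.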
Required Skills
- 2–4 years of hands-on experience in DevOps roles.
- Strong knowledge of Linux administration and shell scripting (Bash/Python).
- Experience with AWS services and cloud architecture.
- Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
- Familiarity with Docker, Kubernetes, and container orchestration.
- Knowledge of Terraform or similar IaC tools.
- Understanding of networking, security, and performance tuning.
- Exposure to monitoring tools (Prometheus, Grafana) and log management.
Preferred Qualifications
- Experience in financial services or fintech environments.
- Knowledge of microservices architecture and enterprise-grade SaaS setups.
- Familiarity with compliance standards in BFSI (Banking, Financial Services, and Insurance).
Why Join InCred?
- Culture: High-performance, ownership-driven, and innovation-focused environment.
- Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
- Rewards: Competitive compensation, ESOPs, and performance-based incentives.
- Impact: Be part of a mission-driven organization transforming India’s credit landscape.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort; we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn't it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What We Expect:
• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.
• You thrive on challenges, not on perks or financial rewards.
• You measure success by your own growth, not external validation.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don't clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What You Will Do:
• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.
• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.
• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.
• Own your work—no babysitting, no micromanagement.
• Work alongside equally driven engineers who expect nothing less than brilliance.
• Learn faster than you ever thought possible.
Must-Have Skills:
• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).
• Deep understanding of concurrency, memory management, and system design.
• Experience with Trading, market data processing, or low-latency systems.
• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.
• Hands-on with Docker, Kubernetes, and CI/CD pipelines.
• A portfolio of work that speaks louder than a resume.
Nice-to-Have Skills:
• Past experience in fintech.
• Contributions to open-source Golang projects.
• A history of building something impactful from scratch.
• Understanding of FIX protocol, WebSockets, and streaming APIs.
Senior Software Engineer
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms, and/or Performance Management Software and Solutions, with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp, and public cloud platforms like Google Cloud and AWS, to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across departments and the R&D organization with audiences having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well-versed in identifying opportunities and risks in a fast-paced environment, with the ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry's broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python experience for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- Company background: product companies preferred; exceptions possible for service-company candidates with strong MLOps and AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI/CD/CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you open to a face-to-face (F2F) interview round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
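As flagged in the pipeline item above, here is a minimal Airflow DAG sketch, assuming a recent Airflow 2.x release; the dag_id, schedule, and task bodies are placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    print("pulling training data")  # stand-in for an S3/Glue extraction step


def train() -> None:
    print("launching training job")  # stand-in for an EMR or SageMaker step


with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task

On MWAA the same DAG file is simply dropped into the environment's S3 DAGs folder; the orchestration model is identical.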
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master's degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Work with algorithms and architectures ranging from traditional statistical methods to deep learning, including employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation (see the sketch after this list).
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
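To make the data-preparation and model-development items above concrete, here is a minimal pandas/scikit-learn sketch. The dataset path, column names, and model choice are hypothetical.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("events.csv")             # hypothetical dataset
df = df.dropna(subset=["label"])           # cleanse: drop rows missing the target
df["amount_log"] = np.log1p(df["amount"])  # feature engineering: tame a skewed column

X = df[["amount_log", "age"]]              # placeholder feature columns
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")

The same prepare-train-evaluate loop scales up to the deep learning and LLM work named above; only the model class and the data volume change.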
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touchpoints include ScyllaDB (similar to BigTable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps experience deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models, including the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements (a minimal training sketch follows this list)
- AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
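For the SageMaker item above, here is a minimal training-job sketch using the SageMaker Python SDK's SKLearn estimator. The role ARN, entry point, framework version, and S3 path are all placeholders.

import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder ARN

estimator = SKLearn(
    entry_point="train.py",      # your training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",   # verify against the versions your account supports
    sagemaker_session=session,
)

# Launches a managed training job against data staged in S3
estimator.fit({"train": "s3://example-bucket/train/"})

SageMaker Pipelines wraps steps like this into a DAG of processing, training, and model-registration steps, which is where the gap analysis mentioned above usually happens.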
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period: 0 to 15 days only
Hybrid work mode: 3 days in office, 2 days at home