50+ Docker Jobs in India
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- Company: product companies preferred; exceptions considered for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable).
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue) (see the sketch after this list).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
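As a point of reference for the pipeline work above, here is a minimal Airflow DAG sketch. It assumes Airflow 2.4+; the DAG name, task names, and task bodies are illustrative placeholders, not this team's actual setup.

```python
# Minimal sketch of an ML training DAG, assuming Airflow 2.4+.
# Task bodies are placeholders; a real pipeline might trigger EMR/Spark
# steps, SageMaker jobs, or EKS workloads instead.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(**context):
    """Placeholder: launch a training job (e.g. an EMR step or Spark submit)."""


def validate_model(**context):
    """Placeholder: compare candidate metrics against the production model."""


def deploy_model(**context):
    """Placeholder: roll the approved artifact out behind the inference service."""


with DAG(
    dag_id="ml_training_pipeline",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)

    train >> validate >> deploy   # lineage: train, then validate, then deploy
```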
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
We are looking for a skilled Node.js software developer to work on and scale education ERP solutions. The ideal candidate should have strong hands-on experience with Node.js, databases, and ERP modules. Basic to moderate experience in PHP (Laravel/Core PHP) will be considered an added advantage.
This role offers the benefit of permanent work from home (6 days working).
Key Responsibilities
- Design, develop, and maintain scalable Education ERP modules using Node.js.
- Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fee management, and finance.
- Develop and optimize REST APIs / GraphQL services and handle system integrations.
- Ensure performance, scalability, and security of high-usage ERP systems.
- Collaborate with frontend and product teams for smooth feature delivery.
- Perform code reviews, follow best coding practices, and guide junior developers.
- Continuously explore and suggest improvements using modern backend technologies.
Required Skills & Qualifications
- Strong expertise in Node.js.
- Working knowledge of PHP (Laravel / Core PHP).
- Proficiency in databases: MySQL and MongoDB (PostgreSQL is a plus).
- Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
- Frontend basics: JavaScript, HTML, CSS, and jQuery (React/Vue is a plus).
- Hands-on experience with Git/GitHub, Docker, and CI/CD pipelines.
- Understanding of scalable architecture and secure backend systems.
- 3+ years of overall experience, with at least 2 years working on ERP or large-scale systems.
Preferred Experience
- Prior experience in Education ERP systems.
- Strong understanding of HR, Exams, Inventory, LMS, Admissions, Fees, and Finance modules.
- Experience working on high-traffic or enterprise-level applications.
- Exposure to cloud platforms (AWS / Azure / GCP) is an added advantage.
Senior Full Stack Developer – Analytics Dashboard
Job Summary
We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.
The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.
Key Responsibilities
- Design and develop a full-stack web application using modern technologies.
- Build scalable backend APIs to handle data ingestion, processing, and storage.
- Develop interactive dashboards and data visualisations for business reporting.
- Implement secure user authentication and role-based access.
- Integrate with third-party APIs using OAuth and REST protocols.
- Design efficient database schemas for analytical workloads.
- Implement background jobs and scheduled tasks for data syncing.
- Ensure performance, scalability, and reliability of the system.
- Write clean, maintainable, and well-documented code.
- Collaborate with product and design teams to translate requirements into features.
Required Technical Skills
Frontend
- Strong experience with React.js
- Experience with Next.js
- Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
- Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)
Backend
- Strong experience with Node.js (Express or NestJS)
- REST and/or GraphQL API development
- Background job systems (cron, queues, schedulers)
- Experience with OAuth-based integrations
Database
- Strong experience with PostgreSQL
- Data modelling and performance optimisation
- Writing complex analytical SQL queries
DevOps / Infrastructure
- Cloud platforms (AWS)
- Docker and basic containerisation
- CI/CD pipelines
- Git-based workflows
Experience & Qualifications
- 5+ years of professional full stack development experience.
- Proven experience building production-grade web applications.
- Prior experience with analytics, dashboards, or data platforms is highly preferred.
- Strong problem-solving and system design skills.
- Comfortable working in a fast-paced, product-oriented environment.
Nice to Have (Bonus Skills)
- Experience with data pipelines or ETL systems.
- Knowledge of Redis or caching systems.
- Experience with SaaS products or B2B platforms.
- Basic understanding of data science or machine learning concepts.
- Familiarity with time-series data and reporting systems.
- Familiarity with Meta Ads / Google Ads APIs
Soft Skills
- Strong communication skills.
- Ability to work independently and take ownership.
- Attention to detail and focus on code quality.
- Comfortable working with ambiguous requirements.
Ideal Candidate Profile (Summary)
A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
About the role:
As a DevOps Engineer, you will play a critical role in bridging the gap between development, operations, and security teams to enable fast, secure, and reliable software delivery. With 5+ years of hands-on experience, you will design, implement, and maintain scalable, automated, cloud-native infrastructure solutions.
Key Requirements:
- 5+ years of hands-on experience in DevOps or Cloud Engineering roles.
- Strong expertise in at least one public cloud provider (AWS / Azure / GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Solid experience with Kubernetes and containerized applications.
- Strong knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD).
- Scripting/programming skills in Python, Shell, or Go for automation (see the example after this list).
- Hands-on experience with monitoring, logging, and incident management.
- Familiarity with security practices in DevOps (secrets management, IAM, vulnerability scanning).
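To illustrate the scripting-for-automation requirement above, a small hedged example: listing EC2 instances that lack a required tag via boto3. The "owner" tag key, the region, and the use case are assumptions for the sketch, not a given standard.

```python
# Hedged sketch: report EC2 instances missing a required tag.
# Tag key and region are illustrative; pagination is omitted for brevity.
import boto3


def untagged_instances(region: str = "ap-south-1", required_tag: str = "owner") -> list[str]:
    """Return IDs of instances in `region` lacking `required_tag`."""
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if required_tag not in tags:
                missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    print(untagged_instances())
```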
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Skills - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability (see the drift-check sketch after this list)
5. Implement best practices for version control, model reproducibility and governance
6. Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies and trends
10. Provide support and guidance to other team members on MLOps practices
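A hedged illustration of the monitoring in item 4: a two-sample Kolmogorov-Smirnov check comparing a live feature against its training baseline. The alpha threshold, the single-feature framing, and the synthetic data are assumptions for the sketch, not a prescribed production setup.

```python
# Hedged sketch: flag data drift on one numeric feature with a two-sample
# Kolmogorov-Smirnov test. Threshold and synthetic data are illustrative.
import numpy as np
from scipy.stats import ks_2samp


def drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """True if the live sample's distribution departs from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha


rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted production feature
print(drifted(baseline, live))  # True: the mean has moved
```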
Required skills and experience:
• 3-10 years of experience in MLOps, DevOps or a related field
• Bachelor’s degree in Computer Science, Data Science, or a related field
• Strong understanding of machine learning principles and model lifecycle management
• Experience in Jenkins pipeline development
• Experience in automation scripting
We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Hands-on experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer - AWS [LONG-TERM CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
- Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
- Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
- Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
- Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
- Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
- Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
- Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
- Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
- Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
- Stay current with emerging cloud technologies, trends, and best practices
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
Location: Bangalore, India
Company: CyberWarFare Labs
About Us:
CyberWarFare Labs is an ed-tech platform focused on cybersecurity, solving real-world security problems through hands-on, real-time training for B2C and B2B audiences.
Job Overview:
We are looking for a DevOps / Cloud Infrastructure Intern to support our engineering team in managing cloud environments, automation, and infrastructure. This role will provide hands-on exposure to cloud platforms, containerization, networking, and DevOps practices. The intern will work closely with developers, QA, and operations teams to learn how modern infrastructure is provisioned, secured, and optimized for scalability and reliability.
Qualifications and requirements
- Pursuing or graduated with a B.S./B.E./B.Tech/BCA/M.Tech/MCA degree in Computer Science or a related field.
- Basic understanding of cloud platforms (AWS, Azure, GCP).
- Familiarity with Docker, Kubernetes, and IaC tools (Terraform/Ansible/CloudFormation) is a plus.
- Knowledge of scripting (Python, Bash, PowerShell).
- Fundamental networking knowledge (IP, routing, subnetting).
- Basic understanding of DevOps, DevSecOps, and CI/CD practices.
- Good problem-solving and collaboration skills.
- Ability to document work clearly and accurately.
Role and Responsibility
- Assist in provisioning and managing infrastructure using Terraform, Ansible, CloudFormation.
- Support deployment and management of applications on AWS, Azure, GCP with Docker & Kubernetes.
- Contribute to CI/CD pipelines and integrate basic DevSecOps practices.
- Write automation scripts in Python, Bash, or PowerShell to optimize workflows (see the example after this list).
- Learn and support network design (LAN/WAN/WLAN, IP addressing, routing).
- Help monitor performance and troubleshoot issues using tools like Wireshark, ping, traceroute, SNMP.
- Maintain and update documentation (diagrams, configs, topology).
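As a flavour of the scripting this internship mentions, a minimal hedged example: checking host reachability with the system ping binary. The hosts are placeholders, and the -c/-W flags assume a Linux ping.

```python
# Hedged sketch: check host reachability with the system `ping` binary.
# Hosts are placeholders; the -c/-W flags assume a Linux ping.
import subprocess

HOSTS = ["8.8.8.8", "1.1.1.1", "192.0.2.1"]

for host in HOSTS:
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    state = "reachable" if result.returncode == 0 else "unreachable"
    print(f"{host}: {state}")
```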
Job Title : Java Backend Developer
Experience : 3 – 8 Years
Location : Pune (Onsite) (Pune candidates Only)
Notice Period : Immediate to 15 Days (or currently serving notice period with last working day approaching)
About the Role :
We are seeking an experienced Java Backend Developer with strong hands-on skills in backend microservices development, API design, cloud platforms, observability, and CI/CD.
The ideal candidate will contribute to building scalable, secure, and reliable applications while working closely with cross-functional teams.
Mandatory Skills : Java 8 / Java 17, Spring Boot 3.x, REST APIs, Hibernate / JPA, MySQL, MongoDB, Prometheus / Grafana / Spring Actuators, AWS, Docker, Jenkins / GitHub Actions, GitHub, Windows 7 / Linux.
Key Responsibilities :
- Design, develop, and maintain backend microservices and REST APIs
- Implement data persistence using relational and NoSQL databases
- Ensure performance, scalability, and security of backend systems
- Integrate observability and monitoring tools for production environments
- Work within CI/CD pipelines and containerized deployments
- Collaborate with DevOps, QA, and product teams for feature delivery
- Troubleshoot, optimize, and improve existing modules and services
Mandatory Skills :
- Languages & Frameworks : Java 8, Java 17, Spring Boot 3.x, REST APIs, Hibernate, JPA
- Databases : MySQL, MongoDB
- Observability : Prometheus, Grafana, Spring Actuators
- Cloud Technologies : AWS
- Containerization Tools : Docker
- CI/CD Tools : Jenkins, GitHub Actions
- Version Control : GitHub
- Operating Systems : Windows 7, Linux
Nice to Have :
- Strong analytical and debugging abilities
- Experience working in Agile/Scrum environments
- Good communication and collaborative skills
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if the candidate is from outside Mumbai)?
- Reason for change (if the candidate has been in their current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time (see the modelling sketch after this list)
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
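For illustration of the fraud/risk-style modelling referenced above, a minimal hedged sketch: a gradient-boosting classifier evaluated with PR-AUC under heavy class imbalance. The synthetic data stands in for real features; nothing here reflects a specific production pipeline.

```python
# Hedged sketch: gradient boosting on a ~3%-positive synthetic dataset,
# scored with PR-AUC (average precision), the imbalance-friendly metric
# named in this posting.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=10_000, n_features=20, weights=[0.97, 0.03], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability of the rare class
print(f"PR-AUC: {average_precision_score(y_test, scores):.3f}")
```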
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
SimplyFI is a fast-growing AI and blockchain-powered product company transforming trade finance and banking through digital innovation. We are looking for a Full Stack Developer with strong expertise in ReactJS (primary) and solid working knowledge of Python (secondary) to join our team in Thane, Mumbai.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications with ReactJS as the primary technology.
- Build and integrate backend services using Python (Flask/Django/FastAPI).
- Develop and manage RESTful APIs for system integration.
- Collaborate on AI-driven product features and support machine learning model integration when required.
- Work closely with DevOps teams to deploy, monitor, and optimize applications on AWS.
- Ensure application performance, scalability, security, and code quality.
- Collaborate with product managers, designers, and QA teams to deliver high-quality product features.
- Write clean, maintainable, and testable code following best practices.
- Participate in agile processes—code reviews, sprint planning, and daily standups.
Required Skills & Qualifications
- Strong hands-on experience with ReactJS, including hooks, state management, Redux, and API integrations.
- Proficient in backend development using Python with frameworks like Flask, Django, or FastAPI.
- Solid understanding of RESTful API design and secure authentication (OAuth2, JWT).
- Experience working with databases: MySQL, PostgreSQL, MongoDB.
- Familiarity with microservices architecture and modern software design patterns.
- Experience with Git, CI/CD pipelines, Docker, and Kubernetes.
- Strong problem-solving, debugging, and performance optimization skills.
Job Title : Senior SDET
Experience : 3 to 5 Years
Contract : 6 Months (C2H) — Onsite
Location : Bangalore
Job Overview :
Seeking a Senior SDET with strong automation experience across frontend, backend, and APIs for distributed microservice-based products.
Key Skills (Mandatory) :
- Golang (preferred) / Java (must be willing to learn Golang)
- Playwright & Selenium (UI Automation)
- REST Assured (API Automation)
- Strong backend automation concepts
- Good understanding of API workflows & service interactions
- Distributed systems experience
- Docker & Kubernetes
- CI/CD exposure
- Ability to debug issues across services
Responsibilities :
- Build & maintain UI/API automation frameworks
- Execute and enhance automated test suites
- Debug failures across microservices & containers
- Integrate automation into CI/CD pipelines
- Collaborate with dev teams to ensure test coverage & quality
Responsibilities:
• Design and develop scalable web application architecture
• Build and maintain robust services and RESTful APIs
• Develop reusable, secure, and high-quality code
• Optimize applications for performance, speed, and scalability
• Implement security and data protection best practices
• Translate UI/UX wireframes into responsive user interfaces
• Integrate front-end and back-end components seamlessly
Skills & Qualifications:
• Software Engineering degree with 3–5 years of experience
• Strong expertise in JavaScript and Node.js
• Good working knowledge of Angular
• Experience with testing frameworks (Jest, Mocha)
• Hands-on experience with Docker and containerization
• Exposure to Cloud platforms (AWS / Azure)
• Understanding of CI/CD pipelines and DevOps practices
• Experience building cloud-based SaaS applications
• Ability to work effectively in cross-functional teams
Generative AI Engineer (Remote)
Experience: 8+ Years
Location: Remote
Industry: HealthTech / Precision Medicine / Data Science
The Opportunity
We are seeking a seasoned Senior Generative AI Engineer to join our mission of unlocking the value of clinical data to improve patient outcomes in emerging markets. Our platform bridges the gap between complex health data and life-saving precision medicine. In this role, you will build the intelligent layer that transforms unstructured, multi-modal clinical data into actionable insights for healthcare providers, pharmaceutical partners, and financial stakeholders.
Core Responsibilities
- Architect GenAI Solutions: Design and deploy scalable Generative AI systems, specifically focusing on RAG (Retrieval-Augmented Generation) architectures to query complex clinical datasets (see the schematic sketch after this list).
- Clinical NLP & Data Structuring: Develop LLM-based pipelines to extract, normalize, and structure data from messy, unstructured medical records and electronic health records (EHR).
- Model Optimization: Fine-tune open-source and proprietary LLMs for medical domain specificity, ensuring high precision, reduced hallucination, and clinical relevance.
- Agentic Workflows: Build AI agents capable of navigating complex healthcare workflows, from patient eligibility screening to longitudinal data analysis.
- Production-Grade Deployment: Collaborate with data engineers to integrate AI models into a robust production environment, ensuring security, data privacy (HIPAA/GDPR compliance), and low-latency performance.
- Leadership & Innovation: Act as a technical subject matter expert, staying ahead of the curve in LLM research and applying "state-of-the-art" techniques to real-world healthcare challenges.
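A schematic, hedged sketch of the RAG mechanics named in the first responsibility: embed a query, retrieve the nearest chunks, and assemble a grounded prompt. The embed() stub, toy corpus, and cosine retrieval are stand-ins; a real system would use an embedding model, a vector database, and an LLM call.

```python
# Schematic sketch of RAG: embed, retrieve, assemble a grounded prompt.
# embed() is a deterministic stand-in; the final LLM call is omitted.
import zlib

import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in embedding: deterministic random unit vector per text."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)


corpus = [
    "Patient reports fatigue; HbA1c 8.2% on 2024-01-10.",
    "Started metformin 500 mg twice daily.",
    "Follow-up: HbA1c improved to 7.1%.",
]
index = np.stack([embed(chunk) for chunk in corpus])


def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)  # cosine similarity on unit vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]


query = "How did the patient's HbA1c change?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # a real pipeline would now send this prompt to the LLM
```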
Required Qualifications
- 8+ years of total experience in Software Engineering, Machine Learning, or Data Science.
- Deep GenAI Expertise: Proven track record of building and deploying LLM-powered applications (using frameworks like LangChain, LlamaIndex, or Haystack).
- Strong NLP Background: Significant experience with Natural Language Processing, especially in entity extraction and semantic search.
- Technical Stack: Proficiency in Python and ML frameworks (PyTorch or TensorFlow).
- Experience with Vector Databases (e.g., Pinecone, Weaviate, Milvus).
- Familiarity with cloud infrastructure (AWS/Azure/GCP) and MLOps practices.
- Data Savvy: Experience working with complex, high-dimensional datasets.
- Education: Master’s or PhD in Computer Science, AI, Mathematics, or a related field (or equivalent deep industry experience).
Preferred Attributes
- Healthcare Experience: Prior experience with clinical data, medical coding (ICD-10, SNOMED), or HealthTech platforms.
- Mission-Driven: A passion for using technology to solve healthcare disparities and improve access to precision medicine.
- Adaptability: Comfortable in a fast-paced, remote-first environment where cross-functional collaboration is key.
To move your application to the next stage, please fill out the Google form with your updated resume.
About JobTwine
JobTwine is an AI-powered platform offering Interview as a Service, helping companies hire 50% faster while doubling the quality of hire. AI interviews, human decisions, zero compromises: we combine AI with human expertise to discover, assess, and hire top talent. JobTwine automates scheduling, uses an AI Copilot to guide human interviewers for consistency, and generates structured, high-quality automated feedback.
Role Overview
We are looking for a Senior DevOps Engineer with 4–5 years of experience, a product-based mindset, and the ability to thrive in a startup environment.
Key Skills & Requirements
- 4–5 years of hands-on DevOps experience
- Experience in product-based companies and startups
- Strong expertise in CI/CD pipelines
- Hands-on experience with AWS / GCP / Azure
- Experience with Docker & Kubernetes
- Strong knowledge of Linux and Shell scripting
- Infrastructure as Code: Terraform / CloudFormation
- Monitoring & logging: Prometheus, Grafana, ELK stack
- Experience in scalability, reliability and automation
What You Will Do
- Work closely with Sandip, CTO of JobTwine, on GenAI DevOps initiatives
- Build, optimize, and scale infrastructure supporting AI-driven products
- Ensure high availability, security and performance of production systems
- Collaborate with engineering teams to improve deployment and release processes
Why Join JobTwine?
- Direct exposure to leadership and real product decision-making
- Steep learning curve with high ownership and accountability
- Opportunity to build and scale a core B2B SaaS product
Job Description
Key Responsibilities
- API & Service Development: Build RESTful and GraphQL APIs for e-commerce, order management, inventory, pricing, and promotions.
- Database Management: Design efficient schemas and optimize performance across SQL and NoSQL data stores.
- Integration Development: Implement and maintain integrations with ERP (SAP B1, ERPNext), CRM, logistics, and third-party systems.
- System Performance & Reliability: Write scalable, secure, and high-performance code to support real-time retail operations.
- Collaboration: Work closely with frontend, DevOps, and product teams to ship new features end-to-end.
- Testing & Deployment: Contribute to CI/CD pipelines, automated testing, and observability improvements.
- Continuous Improvement: Participate in architecture discussions and propose improvements to scalability and code quality.
Requirements
Required Skills & Experience
- 3–5 years of hands-on backend development experience in Node.js, Python, or Java.
- Strong understanding of microservices, REST APIs, and event-driven architectures.
- Experience with databases such as MySQL/PostgreSQL (SQL) and MongoDB/Redis (NoSQL).
- Hands-on experience with AWS / GCP and containerization (Docker, Kubernetes).
- Familiarity with Git, CI/CD, and code review workflows.
- Good understanding of API security, data protection, and authentication frameworks.
- Strong problem-solving skills and attention to detail.
Nice to Have
- Experience in e-commerce or omnichannel retail platforms.
- Exposure to ERP / OMS / WMS integrations.
- Familiarity with GraphQL, Serverless, or Kafka / RabbitMQ.
- Understanding of multi-brand or multi-country architecture challenges.
Profile: DevOps Lead
Location: Gurugram
Experience: 08+ Years
Notice Period: Immediate to 1 week
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate DevOps best practices, automation, and continuous improvement.
What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, patient journey, and supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better predictive models.
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
- Experience working with data processing, transformation, and building model pipelines using tools such as Spark, Airflow, and Docker (see the PySpark sketch after this list).
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
- You can write production-level code and work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiar with medical and healthcare data (medical claims, Rx, preferred).
- Conversant with cloud technologies such as AWS or Google Cloud.
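As an illustration of the Spark-based processing this role lists, a minimal hedged PySpark sketch aggregating synthetic claims-style records. Column names and values are illustrative, not a real schema.

```python
# Hedged PySpark sketch: roll up synthetic claims-style records per patient.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_rollup").getOrCreate()

claims = spark.createDataFrame(
    [
        ("p1", "2024-01-03", 120.0),
        ("p1", "2024-02-11", 80.0),
        ("p2", "2024-01-20", 310.0),
    ],
    ["patient_id", "service_date", "paid_amount"],
)

rollup = claims.groupBy("patient_id").agg(
    F.count("*").alias("n_claims"),          # claims per patient
    F.sum("paid_amount").alias("total_paid"),  # total paid per patient
)
rollup.show()
```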
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we would love to hear from you.
About the Role
Hudson Data is looking for a Senior / Mid-Level SQL Engineer to design, build, optimize, and manage our data platforms. This role requires strong hands-on expertise in SQL, Google Cloud Platform (GCP), and Linux to support high-performance, scalable data solutions.
We are also hiring Python Programmers, Software Developers, and Front-End / Back-End Engineers.
Key Responsibilities:
- Develop and optimize complex SQL queries, views, and stored procedures (see the BigQuery sketch after this list)
- Build and maintain data pipelines and ETL workflows on GCP (e.g., BigQuery, Cloud SQL)
- Manage database performance, monitoring, and troubleshooting
- Work extensively in Linux environments for deployments and automation
- Partner with data, product, and engineering teams on data initiatives
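To ground the SQL-on-GCP work above, a minimal hedged sketch running a BigQuery query from Python. The project, dataset, and table names are illustrative assumptions; credentials come from the application-default environment.

```python
# Hedged sketch: run an analytical query on BigQuery from Python.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials
sql = """
    SELECT DATE(created_at) AS day, COUNT(*) AS orders
    FROM `my_project.sales.orders`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""
for row in client.query(sql).result():
    print(row.day, row.orders)
```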
Required Skills & Qualifications
Must-Have Skills (Essential)
- Expert-level GCP skills (mandatory)
- Strong Linux / shell scripting skills (mandatory)
Nice to Have
- Experience with data warehousing and ETL frameworks
- Python / scripting for automation
- Performance tuning and query optimization experience
Soft Skills
- Strong analytical, problem-solving, and critical-thinking abilities.
- Excellent communication and presentation skills, including data storytelling.
- Curiosity and creativity in exploring and interpreting data.
- Collaborative mindset, capable of working in cross-functional and fast-paced environments.
Education & Certifications
- Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Master's degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
Why Join Hudson Data
At Hudson Data, you'll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools, from AI and ML frameworks to cloud-based analytics platforms, to solve meaningful problems. You'll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
We are looking for a passionate DevOps Engineer to support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and be able to clearly articulate how they work. Knowledge of shell scripting and security is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications such as Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.
Responsibilities and Accountabilities:
- As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infra components of VakilSearch’s product which are hosted in cloud services & on-prem facility
- Design and build tools and frameworks that support deploying and managing our platform, and explore new tools, technologies, and processes to improve speed, efficiency, and scalability
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
- Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
- Resolve incidents as escalated from monitoring tools and the Business Development Team
- Implement and follow security guidelines, both policy and technology, to protect our data
- Identify root causes of issues, develop long-term solutions to fix recurring issues, and document them
- Willingness to perform production operations activities, including at night when required
- Ability to automate recurring tasks with scripts to increase velocity and quality (see the example after this list)
- Ability to manage and deliver multiple project phases at the same time
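A minimal hedged example of the script-based automation mentioned above: an HTTP health check across environments, suitable for a cron job. The endpoint URLs are placeholders.

```python
# Hedged sketch: HTTP health check across environments, cron-friendly.
import urllib.request

ENDPOINTS = {
    "production": "https://example.com/health",
    "staging": "https://staging.example.com/health",
}


def check(name: str, url: str) -> None:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"[OK] {name}: HTTP {resp.status}")
    except Exception as exc:  # timeouts, DNS failures, non-2xx (HTTPError)
        print(f"[ALERT] {name}: {exc}")


if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        check(name, url)
```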
I Qualification(s):
- Experience in working with Linux Server, DevOps tools, and Orchestration tools
- Linux, AWS, GCP, Azure, CompTIA+, and any other certification are a value-add
II Experience Required in DevOps Aspects:
- Length of Experience: Minimum 1-4 years of experience
- Nature of Experience:
- Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
- Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
- Hands-on experience in any of the configuration management IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet is a value add ]
- Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
- Experience with containerization and orchestration tools like Docker and Kubernetes [ Docker swarm is a value add ]
- Good scripting skills in at least one interpreted language - Shell/bash scripting or Ruby/Python/Perl
- Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
- Good at Version Control & source code management systems like GitHub, GIT
- Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
- Experience in Web Server Nginx, and Apache
- Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
- Knowledge in Puma, Unicorn, Gunicorn & Yarn
- Hands-on VMWare ESXi/Xencenter deployments is a value add
- Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
- Deploying, Configuring, and Maintaining Linux server systems ON premises and off-premises
- Code Quality like SonarQube is a value-add
- Test Automation like Selenium, JMeter, and JUnit is a value-add
- Experience in Heroku and OpenStack is a value-add
- Experience in identifying inbound and outbound threats and resolving them
- Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages
- Documenting the Security fix for future use
- Establish cross-team collaboration with security built into the software development lifecycle
- Forensics and Root Cause Analysis skills are mandatory
- Weekly Sanity Checks of the on-prem and off-prem environment
III Skill Set & Personality Traits required:
- An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
- Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers
IV Age Group: 21 – 36 Years
V Cost to the Company: As per industry standards
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Job Title: Technical Lead (Java/Spring Boot/Cloud)
Location: Bangalore
Experience: 8 to 12 Years
Overview
We are seeking a highly accomplished and charismatic Technical Lead to drive the design, development, and delivery of high-volume, scalable, and secure enterprise applications. The ideal candidate will possess deep expertise in the Java ecosystem, particularly with Spring Boot and Microservices Architecture, coupled with significant experience in Cloud Solutions (AWS/Azure) and DevOps practices. This role requires a proven leader capable of setting "big picture" strategy while mentoring a high-performing team.
Key Responsibilities
Architecture Design
- Lead the architecture and design of complex, scalable, and secure cloud-native applications using Java/J2EE and the Spring Boot Framework.
- Design and implement Microservices Architecture and RESTful/SOAP APIs.
- Spearhead Cloud Solution Architecture, including the design and optimization of cloud-based infrastructure deployment with auto-scaling, fault-tolerant, and reliability capabilities (AWS/Azure).
- Guide teams on applying Architecture Concepts, Architectural Styles, and Design Patterns (e.g., UML, Object-Oriented Analysis and Design).
- Architect complex migrations of enterprise applications to the Cloud.
- Conduct Proof-of-Concepts (PoC) for new technologies like Blockchain (Hyperledger) for solutions such as Identity Management.
Technical Leadership & Development
- Lead the entire software development process from conception to completion within an Agile/Waterfall and Cleanroom Engineering environment.
- Define and enforce best practices and coding standards for Java development, ensuring code quality, security, and performance optimization.
- Implement and manage CI/CD Pipelines & DevOps Practices to automate software delivery.
- Oversee cloud migration and transformation programs for enterprise applications, focusing on reducing infrastructure costs and improving scalability.
- Troubleshoot and resolve complex technical issues related to the Java/Spring Boot stack, databases (SQL Server, Oracle, MySQL, PostgreSQL, Elasticsearch, Redis), and cloud components.
- Ensure the adoption of Test Driven Development (TDD), Unit Testing, and Mock Test-Driven Development practices.
People & Delivery Management
- Act as a Charismatic people leader and Transformative Force, building and mentoring high-performing teams from the ground up.
- Drive Delivery Management, collaborating with stakeholders to align technical solutions with business objectives and managing large-scale programs from initiation to delivery.
- Utilize Excellent Communication & Presentation Skills to articulate technical strategies to both technical and non-technical stakeholders.
- Champion organizational change, driving adoption of new processes, ways of working, and technology platforms.
Required Technical Skills
- Languages: Java (JDK 1.5+), Spring Core Framework, Spring Batch, Java Server Pages (JSP), Servlets, Apache Struts, JSON, Hibernate.
- Cloud: Extensive experience with Amazon Web Services (AWS) (Solution Architect certification preferred) and familiarity with Azure.
- DevOps/Containerization: CI/CD Pipelines, Docker.
- Databases: Strong proficiency in MS SQL Server, Oracle, MySQL, PostgreSQL, and NoSQL/Caching (Elasticsearch, Redis).
Education and Certifications
- Master's or Bachelor's degree in a relevant field.
- Certified Amazon Web Services Solution Architect (or equivalent).
- Experience or certification in leadership is a plus.
Backend Engineer (Python / Django + DevOps)
Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)
About SurgePV
SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.
Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.
As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.
Role Overview
We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.
This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.
Key Responsibilities
- Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows.
- Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
- Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
- Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
- Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads (see the brief sketch after this list).
- Implement caching strategies and performance optimizations where required.
- Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
- Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
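For a concrete flavour of the PostgreSQL work mentioned above, here is a minimal sketch of adding a composite index and checking the query plan with psycopg2; the projects table, its columns, and the DSN are all invented for the example:

    import psycopg2  # assumed installed; the DSN below is a placeholder

    conn = psycopg2.connect("dbname=surgepv_example")
    with conn, conn.cursor() as cur:
        # Composite index to speed up a common "owner's latest projects" query.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS idx_projects_owner_created "
            "ON projects (owner_id, created_at DESC)"
        )
        # Inspect the plan to confirm the index is actually used.
        cur.execute(
            "EXPLAIN ANALYZE SELECT id FROM projects "
            "WHERE owner_id = %s ORDER BY created_at DESC LIMIT 20",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)
    conn.close()

In a Django codebase the same index would typically be declared in the model's Meta.indexes and applied through a migration rather than raw SQL.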
Required Skills & Qualifications (Must-Have)
- 2–5 years of experience as a Backend Engineer.
- Strong proficiency in Python and Django / Django REST Framework.
- Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
- Proven experience designing and maintaining REST APIs in production environments.
- Hands-on DevOps experience, including:
- Docker and containerized services
- CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
- Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
- Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
- Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
- Ownership mindset with the ability to take systems from spec → implementation → production → iteration.
Good-to-Have Skills
- Experience working in early-stage startups or building 0→1 products.
- Familiarity with Kubernetes or other container orchestration tools.
- Experience with Infrastructure as Code (Terraform, Pulumi).
- Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
- Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.
What We Offer
- Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
- Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
- A mission-driven, fast-growing product focused on sustainability and clean energy.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks (a short example follows this list)
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
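As noted in the list above, a short sketch of the kind of Python automation script involved, using boto3 with credentials assumed to be configured externally; the namespace, metric name, and region are illustrative only:

    import boto3  # AWS SDK for Python; credentials assumed to be configured

    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

    # Publish a custom metric, e.g. from an operational health check.
    cloudwatch.put_metric_data(
        Namespace="Ops/Example",  # hypothetical namespace for this sketch
        MetricData=[{"MetricName": "QueueBacklog", "Value": 12.0, "Unit": "Count"}],
    )

    # List alarms currently firing so an on-call script can act on them.
    resp = cloudwatch.describe_alarms(StateValue="ALARM")
    for alarm in resp["MetricAlarms"]:
        print(alarm["AlarmName"], alarm["StateReason"])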
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.
Key Responsibilities
Design, develop, and maintain scalable Education ERP modules.
Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.
Build and optimize REST APIs/GraphQL services and ensure seamless integrations.
Optimize system performance, scalability, and security for high-volume ERP usage.
Conduct code reviews, enforce coding standards, and mentor junior developers.
Stay updated with emerging technologies and recommend improvements for ERP solutions.
Required Skills & Qualifications
Strong expertise in Node.js and PHP (Laravel, Core PHP).
Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).
Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).
Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
Hands-on with Git/GitHub, Docker, and CI/CD pipelines.
Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
4+ years of professional development experience, with a minimum of 2 years in ERP systems.
Preferred Experience
Prior work in the education ERP domain.
Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.
Exposure to high-traffic enterprise applications.
Strong leadership, mentoring, and problem-solving abilities
Benefit:
Permanent Work From Home
About PGAGI:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Position Overview:
PGAGI Consultancy Pvt. Ltd. is seeking a proactive and motivated DevOps Intern with around 3-6 months of hands-on experience to support our AI model deployment and infrastructure initiatives. This role is ideal for someone looking to deepen their expertise in DevOps practices tailored to AI/ML environments, including CI/CD automation, cloud infrastructure, containerization, and monitoring.
Key Responsibilities:
AI Model Deployment & Integration
- Assist in containerizing and deploying AI/ML models into production using Docker.
- Support integration of models into existing systems and APIs.
Infrastructure Management
- Help manage cloud and on-premise environments to ensure scalability and consistency.
- Work with Kubernetes for orchestration and environment scaling.
CI/CD Pipeline Automation
- Collaborate on building and maintaining automated CI/CD pipelines (e.g., GitHub Actions, Jenkins).
- Implement basic automated testing and rollback mechanisms.
Hosting & Web Environment Management
- Assist in managing hosting platforms, web servers, and CDN configurations.
- Support DNS, load balancer setups, and ensure high availability of web services.
Monitoring, Logging & Optimization
- Set up and maintain monitoring/logging tools like Prometheus and Grafana.
- Participate in troubleshooting and resolving performance bottlenecks.
Security & Compliance
- Apply basic DevSecOps practices including security scans and access control implementations.
- Follow security and compliance checklists under supervision.
Cost & Resource Management
- Monitor resource usage and suggest cost optimization strategies in cloud environments.
Documentation
- Maintain accurate documentation for deployment processes and incident responses.
Continuous Learning & Innovation
- Suggest improvements to workflows and tools.
- Stay updated with the latest DevOps and AI infrastructure trends.
Requirements:
- Around 6 months of experience in a DevOps or related technical role (internship or professional).
- Basic understanding of Docker, Kubernetes, and CI/CD tools like GitHub Actions or Jenkins.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and monitoring tools (e.g., Prometheus, Grafana).
- Exposure to scripting languages (e.g., Bash, Python) is a plus.
- Strong problem-solving skills and eagerness to learn.
- Good communication and documentation abilities.
Compensation
- Joining Bonus: INR 2,500 one-time bonus upon joining.
- Monthly Stipend: Base stipend of INR 8,000 per month, with the potential to increase up to INR 20,000 based on performance evaluations.
- Performance-Based Pay Scale: Eligibility for monthly performance-based bonuses, rewarding exceptional project contributions and teamwork.
- Additional Benefits: Access to professional development opportunities, including workshops, tech talks, and mentoring sessions.
Ready to kick-start your DevOps journey in a dynamic AI-driven environment? Apply now
#Devops #Docker #Kubernetes #DevOpsIntern
We are looking for a DevOps Engineer with hands-on experience in automating, monitoring, and scaling cloud-native infrastructure.
You will play a critical role in building and maintaining high-availability, secure, and scalable CI/CD pipelines for our AI- and blockchain-powered FinTech platforms.
You will work closely with Engineering, QA, and Product teams to streamline deployments, optimize cloud environments, and ensure reliable production systems.
Key Responsibilities
- Design, build, and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor applications on AWS, Azure, or GCP
- Ensure high availability, scalability, and performance of production environments
- Implement security best practices across infrastructure and DevOps workflows
- Automate environment provisioning, deployments, backups, and monitoring
- Configure and manage containerized applications using Docker and Kubernetes
- Collaborate with developers to improve build, release, and deployment processes
- Monitor systems using tools like Prometheus, Grafana, ELK Stack, or CloudWatch
- Perform root cause analysis (RCA) and support production incident response
Required Skills & Experience
- 2+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with AWS, Azure, or GCP
- Proven experience in setting up and managing CI/CD pipelines
- Proficiency in Docker, Kubernetes, and container orchestration
- Experience with Terraform, Ansible, or similar IaC tools
- Knowledge of monitoring, logging, and alerting systems
- Strong scripting skills using Shell, Bash, or Python
- Good understanding of Git, version control, and branching strategies
- Experience supporting production-grade SaaS or enterprise platforms
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints (a minimal example follows this list)
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
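As referenced in the list above, a minimal REST endpoint sketch using FastAPI, one of several Python frameworks that could fit this role; the item resource and routes are invented for the example:

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    # In-memory store standing in for PostgreSQL/MySQL/MongoDB in this sketch.
    ITEMS: dict[int, dict] = {}

    class Item(BaseModel):
        name: str
        price: float

    @app.post("/items/{item_id}")
    def create_item(item_id: int, item: Item):
        ITEMS[item_id] = item.dict()
        return {"id": item_id, **ITEMS[item_id]}

    @app.get("/items/{item_id}")
    def read_item(item_id: int):
        if item_id not in ITEMS:
            raise HTTPException(status_code=404, detail="Item not found")
        return ITEMS[item_id]

During development this can be served with uvicorn (e.g. uvicorn main:app --reload).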
Mandatory Technical Skill Set
- Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience in:
- EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
- SQL
Hiring DevOps Engineers (Freelance)
We’re hiring for our client: Biz-Tech Analytics
Role: DevOps Engineer (Freelance)
Experience: 4-7+ years
Project: Terminus Project
Location: Remote
Engagement Type: Freelance | Project-based
About the Role:
Biz-Tech Analytics is looking for experienced DevOps Engineers to contribute to the Terminus Project, a hands-on initiative involving system-level problem solving, automation, and containerised environments.
This role is ideal for engineers who enjoy working close to the system layer, debugging complex issues, and building reliable automation in isolated environments.
Key Responsibilities:
• Work on Linux-based systems, handling process management, file systems, and system utilities
• Write clean, testable Python code for automation and verification
• Build, configure, and manage Docker-based environments for testing and deployment
• Troubleshoot and debug complex system and software issues
• Collaborate using Git and GitHub workflows, including pull requests and branching
• Execute tasks independently and iterate based on structured feedback
Required Skills & Qualifications:
• Expert-level proficiency with Linux CLI, including Bash scripting
• Strong Python programming skills for automation and tooling
• Hands-on experience with Docker and containerized environments
• Excellent problem-solving and debugging skills
• Proficiency with Git and standard GitHub workflows
Preferred Qualifications:
• Professional experience in DevOps or Site Reliability Engineering (SRE)
• Exposure to cloud platforms such as AWS, GCP, or Azure
• Familiarity with machine learning frameworks like TensorFlow or PyTorch
• Prior experience contributing to open-source projects
Engagement Details
• Fully remote freelance engagement
• Flexible workload, with scope to take on additional tasks
• Opportunity to work on real-world systems supporting advanced AI and infrastructure projects
Apply via Google form: https://forms.gle/SDgdn7meiicTNhvB8
About Biz-Tech Analytics:
Biz-Tech Analytics partners with global enterprises, AI labs, and industrial businesses to help them build and scale frontier AI systems. From data creation to deployment, the team delivers specialised services including human-in-the-loop annotation, reinforcement learning from human feedback (RLHF), and custom dataset creation.
With a network of 500+ vetted developers, STEM professionals, linguists, and domain experts, Biz-Tech Analytics supports leading global platforms by enhancing complex AI models and providing high-precision feedback at scale.
Their work sits at the intersection of advanced research, engineering rigor, and real-world AI deployment, making them a strong partner for cutting-edge AI initiatives.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have delivered over 170 automation projects for 65+ global clients, including Fortune 500 enterprises that trust us with mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving complex engineering challenges with precision and reliability.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast, iterate faster, and amplify our impact, always prioritizing quality over speed.
Who We Seek
We are hiring DevOps Engineers (6 months - 1 year experience) to join our DevOps team. You will work on infrastructure automation, CI/CD pipelines, cloud deployments, container orchestration, and system reliability.
This role is ideal for someone who wants to work with modern DevOps tooling and contribute to high-impact engineering decisions.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
CI/CD Pipeline Management
- Design, implement, and maintain efficient CI/CD pipelines using Jenkins, GitLab CI, Azure DevOps, or similar tools.
- Automate build, test, and deployment processes to increase delivery speed and reliability.
Infrastructure as Code (IaC)
- Provision and manage infrastructure on AWS, Azure, or GCP using Terraform, CloudFormation, or Ansible.
- Maintain scalable, secure, and cost-optimized environments.
Containerization & Orchestration
- Build and manage Docker-based environments.
- Deploy and scale workloads using Kubernetes.
Monitoring & Alerting
- Implement monitoring, logging, and alerting systems using Prometheus, Grafana, ELK Stack, Datadog, or similar.
- Develop dashboards and alerts to detect issues proactively.
System Reliability & Performance
- Implement systems for high availability, disaster recovery, and fault tolerance.
- Troubleshoot and optimize infrastructure performance.
Scripting & Automation
- Write automation scripts in Python, Bash, or Shell to streamline operations.
- Automate repetitive workflows to reduce manual intervention.
Collaboration & Best Practices
- Work closely with Development, QA, and Security teams to embed DevOps best practices into the SDLC.
- Follow security standards for deployments and infrastructure.
- Work efficiently with Unix/Linux systems and understand core networking concepts (DNS, DHCP, NAT, VPN, TCP/IP).
What We’re Looking For
- Strong understanding of Linux distributions (Ubuntu, CentOS, RHEL) and Windows environments.
- Proficiency with Git and experience using GitHub, GitLab, or Bitbucket.
- Ability to write automation scripts using Bash/Shell or Python.
- Basic knowledge of relational databases like MySQL or PostgreSQL.
- Familiarity with web servers such as NGINX or Apache2.
- Experience working with AWS, Azure, GCP, or DigitalOcean.
- Foundational understanding of Ansible for configuration management.
- Basic knowledge of Terraform or CloudFormation for IaC.
- Hands-on experience with Jenkins or GitLab CI/CD pipelines.
- Strong knowledge of Docker for containerization.
- Basic exposure to Kubernetes for orchestration.
- Familiarity with at least one programming language (Java, Node.js, or Python).
Benefits
🤝 Work directly with founders and engineering leaders.
💪 Drive projects that create real business impact, not busywork.
💡 Gain practical, industry-relevant skills you won’t learn in college.
🚀 Accelerate your growth by working on meaningful engineering challenges.
📈 Learn continuously with mentorship and structured development opportunities.
🤗 Be part of a collaborative, high-energy workplace that values innovation.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that trust us with their mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving high-impact engineering problems.
What We Value
Ownership: You take accountability for outcomes, not just tasks.
High Velocity: We iterate fast, learn constantly, and deliver with precision.
Who We Seek
We are looking for a DevOps Intern to join our DevOps team and gain hands-on experience working with real infrastructure, automation pipelines, and deployment environments. You will support CI/CD processes, cloud environments, monitoring, and system reliability while learning industry-standard tools and practices.
We’re seeking someone who is technically curious, eager to learn, and driven to build reliable systems in a fast-paced engineering environment.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Assist in deploying product updates, monitoring system performance, and identifying production issues.
- Contribute to building and improving CI/CD pipelines for automated deployments.
- Support the provisioning, configuration, and maintenance of cloud infrastructure.
- Work with tools like Docker, Jenkins, Git, and monitoring systems to streamline workflows.
- Help automate recurring operational processes using scripting and DevOps tools.
- Participate in backend integrations aligned with product or customer requirements.
- Collaborate with developers, QA, and operations to improve reliability and scalability.
- Gain exposure to containerization, infrastructure-as-code, and cloud platforms.
- Document processes, configurations, and system behaviours to support team efficiency.
- Learn and apply DevOps best practices in real-world environments.
What We’re Looking For
- Hands-on experience or coursework with Docker, Linux, or cloud fundamentals.
- Familiarity with Jenkins, Git, or basic CI/CD concepts.
- Basic understanding of AWS, Azure, or Google Cloud environments.
- Exposure to configuration management tools like Ansible, Puppet, or similar.
- Interest in Kubernetes, Terraform, or infrastructure-as-code practices.
- Ability to write or modify simple shell or Python scripts.
- Strong analytical and troubleshooting mindset.
- Good communication skills with the ability to articulate technical concepts clearly.
- Eagerness to learn, take initiative, and adapt in a fast-moving engineering environment.
- Attention to detail and a commitment to accuracy and reliability.
Benefits
🤝 Work directly with founders and senior engineers.
💪 Contribute to live projects that impact real customers and systems.
💡 Learn tools and practices that engineering programs rarely teach.
🚀 Accelerate your growth through real-world problem solving.
📈 Build a strong DevOps foundation with continuous learning opportunities.
🤗 Thrive in a collaborative environment that encourages experimentation and growth.
🚀 Hiring: Java Developer at Deqode
⭐ Experience: 2+ Years
📍 Location: Mumbai
⭐ Work Mode: 5 Days Work from Office
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We are looking for a Java Developer (Mid/Senior) to join our Implementation & Application Support team supporting critical fintech platforms. The role involves backend development, application monitoring, incident management, and close collaboration with customers. Senior developers will handle escalations, mentor juniors, and drive operational excellence.
Key Responsibilities (Brief)
✅ Develop and support Java applications (Spring Boot / Quarkus).
✅ Monitor applications and resolve production issues.
✅ Manage incidents, perform root cause analysis, and handle ITSM tickets.
✅ Collaborate with customers and internal teams.
✅ (Senior) Lead escalations and mentor junior engineers.
Top Skills Required
✅ Java, Spring Boot, Quarkus
✅Application Support & Incident Management
✅ServiceNow / JIRA / ITSM tools
✅Monitoring & Production Support
✅Kafka, Redis, Solace, Aerospike (Good to have)
✅Docker, Kubernetes, CI/CD (Plus)
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca, along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services such as GuardDuty and Azure Security Center
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Must come from a B2B SaaS product company background
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Must demonstrate strong ownership mindset, proactive security-first thinking, and ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts yet maintain the quality of content, interact and share your ideas, and enjoy plenty of learning while at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits that the company offers.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models
- Develop features to improve model accuracy and outcomes.
- Deploy models into production using Docker, Kubernetes, and cloud services.
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets.
- Hands-on experience with cloud AI/ML services.
- Exposure to RAG architecture
Job Title: AI/ML Engineer (LLMs, RAG & Agent Systems)
Location: Bangalore, India (On-site)
Type: Full-time
About the Role
As an AI/ML Engineer, you’ll be part of a small, fast-moving team focused on developing LLM-powered agentic systems that drive our next generation of AI products.
You’ll work on designing, implementing, and optimizing pipelines involving retrieval-augmented generation (RAG), multi-agent coordination, and tool-using AI systems.
Responsibilities
- Design and implement components for LLM-based systems (retrievers, planners, memory, evaluators).
- Build and maintain RAG pipelines using vector databases and embedding models (see the sketch after this list).
- Experiment with reasoning frameworks like ReAct, Tree of Thought, and Reflexion.
- Collaborate with backend and infra teams to deploy and optimize agentic applications.
- Research and experiment with open-source LLM frameworks to identify best-fit architectures.
- Contribute to internal tools for evaluation, benchmarking, and scaling AI agents.
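As a rough sketch of the retrieval step in such a RAG pipeline, assuming only NumPy: the embedding function below is a stub so the example is self-contained (stable within a single run); a real pipeline would call an embedding model and a vector database instead.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stub: pseudo-embedding derived from the text's hash, unit-normalized.
        # A real pipeline would call an embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    DOCS = [
        "Agents can call tools to act on the world.",
        "RAG grounds generation in retrieved context.",
        "Vector databases index embeddings for similarity search.",
    ]
    DOC_VECS = np.stack([embed(d) for d in DOCS])

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scores = DOC_VECS @ q          # cosine similarity (vectors are unit-norm)
        top = np.argsort(scores)[::-1][:k]
        return [DOCS[i] for i in top]

    print(retrieve("How does retrieval-augmented generation work?"))

The retrieved passages would then be packed into the LLM prompt as grounding context.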
Required Skills
- Strong foundation in ML/DL theory and implementation (PyTorch preferred).
- Understanding of transformer architectures, embeddings, and LLM mechanics.
- Practical exposure to prompt engineering, tool calling, and structured output design.
- Experience in Python, Git/GitHub, and data processing pipelines.
- Familiarity with RAG systems, vector databases, and API-based model inference.
- Ability to write clean, modular, and reproducible code.
Preferred Skills
- Experience with LangChain, LangGraph, Autogen, or CrewAI.
- Hands-on with Hugging Face ecosystem (transformers, datasets, etc.).
- Working knowledge of Redis, PostgreSQL, or MongoDB.
- Experience with Docker and deployment workflows.
- Familiarity with OpenAI, Anthropic, vLLM, or Ollama inference APIs.
- Exposure to MLOps concepts like CI/CD, model versioning, or cloud (AWS/GCP/Azure).
What We Value
- Deep understanding of core principles over surface-level familiarity with tools.
- Ability to think like a researcher and execute like an engineer.
- Collaborative mindset, building together, learning together.
About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Job Summary:
We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS Code build/deploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
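Not a prescribed tool, but to illustrate the expected comfort level, a tiny connection-pressure check against PostgreSQL's standard pg_stat_activity view; the DSN is a placeholder:

    import psycopg2  # assumed installed; DSN below is a placeholder

    conn = psycopg2.connect("dbname=postgres")
    with conn, conn.cursor() as cur:
        # Count current backends as a crude connection-pressure signal.
        cur.execute("SELECT count(*) FROM pg_stat_activity")
        (active,) = cur.fetchone()
        cur.execute("SHOW max_connections")
        (max_conn,) = cur.fetchone()
        print(f"{active} of {max_conn} connections in use")
    conn.close()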
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.
Role: Full-Time, Long-Term
Required: Python, SQL
Preferred: Experience with financial or crypto data
OVERVIEW
We are seeking a data engineer to join as a core member of our technical team. This is a long-term position for someone who wants to build robust, production-grade data infrastructure and grow with a small, focused team. You will own the data layer that feeds our machine learning pipeline—from ingestion and validation through transformation, storage, and delivery.
The ideal candidate is meticulous about data quality, thinks deeply about failure modes, and builds systems that run reliably without constant attention. You understand that downstream ML models are only as good as the data they consume.
CORE TECHNICAL REQUIREMENTS
Python (Required): Professional-level proficiency. You write clean, maintainable code for data pipelines—not throwaway scripts. Comfortable with Pandas, NumPy, and their performance characteristics. You know when to use Python versus push computation to the database.
SQL (Required): Advanced SQL skills. Complex queries, query optimization, schema design, execution plans. PostgreSQL experience strongly preferred. You think about indexing, partitioning, and query performance as second nature.
Data Pipeline Design (Required): You build pipelines that handle real-world messiness gracefully. You understand idempotency, exactly-once semantics, backfill strategies, and incremental versus full recomputation tradeoffs. You design for failure—what happens when an upstream source is late, returns malformed data, or goes down entirely. Experience with workflow orchestration required: Airflow, Prefect, Dagster, or similar.
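One widely used idempotency pattern, sketched here with psycopg2 and a hypothetical prices table: replaying the same batch upserts rather than duplicating rows, so retries and backfills stay safe.

    import psycopg2  # assumed installed; DSN below is a placeholder

    ROWS = [("BTC-USD", "2024-01-01T00:00:00Z", 42000.0)]  # illustrative batch

    conn = psycopg2.connect("dbname=markets_example")
    with conn, conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS prices ("
            " symbol text, ts timestamptz, price double precision,"
            " PRIMARY KEY (symbol, ts))"
        )
        # ON CONFLICT makes the load idempotent: replaying a batch updates
        # existing keys instead of inserting duplicates.
        cur.executemany(
            "INSERT INTO prices (symbol, ts, price) VALUES (%s, %s, %s) "
            "ON CONFLICT (symbol, ts) DO UPDATE SET price = EXCLUDED.price",
            ROWS,
        )
    conn.close()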
Data Quality (Required): You treat data quality as a first-class concern. You implement validation checks, anomaly detection, and monitoring. You know the difference between data that is missing versus data that should not exist. You build systems that catch problems before they propagate downstream.
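A minimal sketch of the kind of checks meant here, in pandas; the columns and the five-minute freshness threshold are purely illustrative:

    import pandas as pd

    df = pd.DataFrame({
        "ts": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:01"], utc=True),
        "price": [42000.0, -5.0],
    })

    problems = []
    if df["price"].isna().any():
        problems.append("missing prices")            # data that is missing
    if (df["price"] <= 0).any():
        problems.append("non-positive prices")       # data that should not exist
    # Freshness: the newest row should be recent; 5 minutes is arbitrary here.
    age = pd.Timestamp.now(tz="UTC") - df["ts"].max()
    if age > pd.Timedelta(minutes=5):
        problems.append(f"stale data: last row is {age} old")

    if problems:
        raise ValueError("; ".join(problems))  # fail loudly before propagating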
WHAT YOU WILL BUILD
Data Ingestion: Pipelines pulling from diverse sources—crypto exchanges, traditional market feeds, on-chain data, alternative data. Handling rate limits, API quirks, authentication, and source-specific idiosyncrasies.
Data Validation: Checks ensuring completeness, consistency, and correctness. Schema validation, range checks, freshness monitoring, cross-source reconciliation.
Transformation Layer: Converting raw data into clean, analysis-ready formats. Time series alignment, handling different frequencies and timezones, managing gaps.
Storage and Access: Schema design optimized for both write patterns (ingestion) and read patterns (ML training, feature computation). Data lifecycle and retention management.
Monitoring and Alerting: Observability into pipeline health. Knowing when something breaks before it affects downstream systems.
DOMAIN EXPERIENCE
Preference for candidates with experience in financial or crypto data—understanding market data conventions, exchange-specific quirks, and point-in-time correctness. You know why look-ahead bias is dangerous and how to prevent it.
Time series data at scale—hundreds of symbols with years of history, multiple frequencies, derived features. You understand temporal joins, windowed computations, and time-aligned data challenges.
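One concrete tool for both concerns is an as-of join, sketched below with pandas merge_asof (frame contents are illustrative): each trade sees only the most recent feature value known strictly before its timestamp, which is one way to enforce point-in-time correctness and avoid look-ahead bias.

import pandas as pd

trades = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-02 10:05", "2024-01-02 10:20"]),
    "symbol": ["BTC", "BTC"],
})
features = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-02 10:00", "2024-01-02 10:15"]),
    "symbol": ["BTC", "BTC"],
    "vol_1h": [0.8, 1.1],
})

# Both frames must be sorted on the join key. allow_exact_matches=False
# enforces "strictly before": a feature stamped 10:15 is invisible to a
# trade at exactly 10:15.
joined = pd.merge_asof(
    trades.sort_values("ts"),
    features.sort_values("ts"),
    on="ts",
    by="symbol",
    allow_exact_matches=False,
)
print(joined)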
High-dimensional feature stores—we work with hundreds of thousands of derived features. Experience managing, versioning, and serving large feature sets is valuable.
ENGINEERING STANDARDS
Reliability: Pipelines run unattended. Failures are graceful with clear errors, not silent corruption. Recovery is straightforward.
Reproducibility: Same inputs and code version produce identical outputs. You version schemas, track lineage, and can reconstruct historical states.
Documentation: Schemas, data dictionaries, pipeline dependencies, operational runbooks. Others can understand and maintain your systems.
Testing: You write tests for pipelines—validation logic, transformation correctness, edge cases. Untested pipelines are broken pipelines waiting to happen.
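For instance, a pytest sketch covering edge cases of a gap-filling transform (the transform itself is hypothetical):

import pandas as pd
import pandas.testing as pdt


def forward_fill_gaps(s: pd.Series, limit: int = 2) -> pd.Series:
    # Fill short gaps only; long outages must stay visible as NaN.
    return s.ffill(limit=limit)


def test_short_gap_is_filled():
    s = pd.Series([1.0, None, 3.0])
    pdt.assert_series_equal(forward_fill_gaps(s), pd.Series([1.0, 1.0, 3.0]))


def test_long_outage_stays_missing():
    s = pd.Series([1.0, None, None, None, 5.0])
    # Only the first two consecutive gaps are filled; the third remains NaN.
    assert forward_fill_gaps(s).isna().sum() == 1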
TECHNICAL ENVIRONMENT
PostgreSQL, Python, workflow orchestration (flexible on tool), cloud infrastructure (GCP preferred but flexible), Git.
WHAT WE ARE LOOKING FOR
Attention to Detail: You notice when something is slightly off and investigate rather than ignore.
Defensive Thinking: You assume sources will send bad data, APIs will fail, schemas will change. You build accordingly.
Self-Direction: You identify problems, propose solutions, and execute without waiting to be told.
Long-Term Orientation: You build systems you will maintain for years.
Communication: You document clearly, explain data issues to non-engineers, and surface problems early.
EDUCATION
University degree in a quantitative/technical field preferred: Computer Science, Mathematics, Statistics, Engineering. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of a data pipeline you built and maintained, (3) Links to relevant work if available, (4) Availability and timezone.
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices.
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command of Python frameworks such as Django, Flask, or FastAPI (a minimal sketch follows this list).
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest.
• Knowledge of containerization (Docker) is a plus.
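For context, a minimal sketch of the kind of REST endpoint this role involves, using FastAPI (the Item model and routes are illustrative, not a company API):

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    id: int
    name: str


_db = {}  # in-memory store, for illustration only


@app.post("/items", status_code=201)
def create_item(item: Item) -> Item:
    if item.id in _db:
        raise HTTPException(status_code=409, detail="item already exists")
    _db[item.id] = item
    return item


@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _db:
        raise HTTPException(status_code=404, detail="item not found")
    return _db[item_id]

Run with uvicorn main:app --reload (assuming the file is saved as main.py).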
Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python
Criteria:
Looking for candidates with a notice period of 15 days (maximum 30 days).
Candidates from the Hyderabad location only.
Candidates currently at EPAM only.
1. 4+ years of software development experience.
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development
8. SQL database
Description
Position Overview
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server (see the sketch after this list)
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
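As a reference point, a minimal NATS pub/sub sketch using the nats-py client (subject and queue names are illustrative; assumes a NATS server on localhost):

import asyncio

import nats


async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handler(msg):
        print(f"{msg.subject}: {msg.data.decode()}")

    # Queue group: multiple workers share one subscription and each message
    # is delivered to only one member, giving simple load balancing.
    await nc.subscribe("orders.created", queue="workers", cb=handler)
    await nc.publish("orders.created", b'{"order_id": 42}')
    await nc.flush()
    await nc.drain()


asyncio.run(main())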
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
- Technical skills: Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
- Soft skills: Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
We're Hiring: Golang Developer
Location: Bangalore
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
We're Hiring: Golang Developer (3–5 Years Experience)
Location: Mumbai
We are looking for a skilled Golang Developer with strong experience in backend development, microservices, and system-level programming. In this role, you will work on high-performance trading systems, low-latency architecture, and scalable backend solutions.
Key Responsibilities
• Develop and maintain backend services using Golang
• Build scalable, secure, and high-performance microservices
• Work with REST APIs, WebSockets, message queues, and distributed systems
• Collaborate with DevOps, frontend, and product teams for smooth project delivery
• Optimize performance, troubleshoot issues, and ensure system stability
Skills & Experience Required
• 3–5 years of experience in Golang development
• Strong understanding of data structures, concurrency, and networking
• Hands-on experience with MySQL / Redis / Kafka or similar technologies
• Good understanding of microservices architecture, APIs, and cloud environments
• Experience in fintech/trading systems is an added advantage
• Immediate joiners or candidates with up to 30 days notice period preferred
If you are passionate about backend engineering and want to build fast, scalable trading systems, we'd love to hear from you.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least two of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases you have hands on experience?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
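As one illustration of a drift check, a minimal Population Stability Index (PSI) sketch in NumPy (the bin count and alert threshold are common rules of thumb, not fixed requirements):

import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin edges come from the training (expected) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))


# Rule of thumb: PSI above roughly 0.2 suggests drift worth alerting on.
rng = np.random.default_rng(0)
print(psi(rng.normal(0, 1, 10_000), rng.normal(0.5, 1, 10_000)))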
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC; see the evaluation sketch after this list)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
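A minimal evaluation sketch for the imbalanced-classification case (synthetic data; assumes scikit-learn 1.2+ for class_weight on histogram gradient boosting):

from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Roughly 1% positives, mimicking a fraud base rate.
X, y = make_classification(
    n_samples=20_000, n_features=20, weights=[0.99], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = HistGradientBoostingClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# ROC-AUC can look deceptively high at this imbalance; PR-AUC (average
# precision) tracks how precision holds up as recall increases.
scores = clf.predict_proba(X_te)[:, 1]
print("PR-AUC:", average_precision_score(y_te, scores))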

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
Notice period – 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai