50+ Python Jobs in India
We are seeking a Data Engineer with 3–4 years of relevant experience to join our team. The ideal candidate should have strong expertise in Python and SQL and be available to join immediately.
Location: Bangalore
Experience: 3–4 Years
Joining: Immediate Joiner preferred
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and data models
- Extract, transform, and load (ETL) data from multiple sources
- Write efficient and optimized SQL queries for data analysis and reporting
- Develop data processing scripts and automation using Python
- Ensure data quality, integrity, and performance across systems
- Collaborate with cross-functional teams to support business and analytics needs
- Troubleshoot data-related issues and optimize existing processes
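The ETL responsibilities above can be sketched in miniature. This is an illustrative example only (table and column names are invented, and SQLite stands in for whatever warehouse the team actually uses): extract rows from CSV, transform and validate them, and load them into a relational table.

```python
import csv
import io
import sqlite3

def run_etl(csv_text: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV text, transform them, and load into SQLite."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount REAL)"
    )
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        # Transform step: normalise IDs and skip malformed amounts
        try:
            rows.append((rec["order_id"].strip().upper(), float(rec["amount"])))
        except (KeyError, ValueError):
            continue
    # Load step: idempotent upsert keeps reruns safe
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
loaded = run_etl("order_id,amount\na1,10.5\na2,not-a-number\na3,7\n", conn)
```

A real pipeline would add logging, schema checks, and incremental loads, but the extract/transform/load split stays the same.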
Required Skills & Qualifications:
- 3–4 years of hands-on experience as a Data Engineer or similar role
- Strong proficiency in Python and SQL
- Experience working with relational databases and large datasets
- Good understanding of data warehousing and ETL concepts
- Strong analytical and problem-solving skills
- Ability to work independently and in a team-oriented environment
Preferred:
- Experience with cloud platforms or data tools (added advantage)
- Exposure to performance tuning and data optimization
AI-Native Software Developer Intern
Build real AI agents used daily across the company
We’re looking for a high-agency, AI-native software developer intern to help us build internal AI agents that improve productivity across our entire company (80–100 people using them daily).
You will ship real systems, used by real teams, with real impact.
If you’ve never built anything outside coursework, this role is probably not a fit.
What You’ll Work On
You will work directly on designing, building, deploying, and iterating AI agents that power internal workflows.
Examples of problems you may tackle:
Internal AI agents for:
- Knowledge retrieval across Notion / docs / Slack
- Automated report generation
- Customer support assistance
- Process automation (ops, hiring, onboarding, etc.)
- Decision-support copilots
- Prompt engineering + structured outputs + tool-using agents
Building workflows using:
- LLM APIs
- Vector databases
- Agent frameworks
- Internal dashboards
- Improving reliability, latency, cost, and usability of AI systems
- Designing real UX around AI tools (not just scripts)
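The retrieval pattern behind a knowledge-retrieval agent can be sketched without any vector database: embed documents and a query, then rank by cosine similarity. This toy uses bag-of-words counts as a stand-in "embedding" (a real agent would call an embedding model and a vector store); the document corpus is invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real agents use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "onboarding": "steps to onboard a new hire and grant slack access",
    "expenses": "how to file an expense report and get reimbursed",
}

def retrieve(query: str) -> str:
    # Return the document key with the highest similarity to the query
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(docs[d])))

best = retrieve("grant a new hire access to slack")
```

Swapping `embed` for an embedding-API call and `docs` for a vector DB turns this into the retrieval half of a RAG agent.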
You will own features end-to-end:
- Problem understanding
- Solution design
- Implementation
- Testing
- Deployment
- Iteration based on user feedback
What We Expect From You
You must:
- Be AI-native: you actively use tools like:
- ChatGPT / Claude / Cursor / Copilot
- AI for debugging, scaffolding, refactoring
- Prompt iteration
- Rapid prototyping
- Be comfortable with at least one programming language (Python, TypeScript, JS, etc.)
- Have strong critical thinking
- You question requirements
- You think about edge cases
- You optimize systems, not just make them “work”
- Be high agency
- You don’t wait for step-by-step instructions
- You proactively propose solutions
- You take ownership of outcomes
- Be able to learn fast on the job
Help will be provided, but you will not be spoon-fed.
Absolute Requirement (Non-Negotiable)
If you have not built any side projects with a visible output, you will most likely be rejected.
We expect at least one of:
- A deployed web app
- A GitHub repo with meaningful commits
- A working AI tool
- A live demo link
- A product you built and shipped
- An agent, automation, bot, or workflow you created
Bonus Points (Strong Signals)
These are not required but will strongly differentiate you:
- Built projects using:
- LLM APIs (OpenAI, Anthropic, etc.)
- LangChain / LlamaIndex / custom agent frameworks
- Vector DBs like Pinecone, Weaviate, FAISS
- RAG systems
- Experience deploying:
- Vercel, Fly.io, Render, AWS, etc.
- Built internal tools for a team before
- Strong product intuition (you care about UX, not just code)
- Experience automating your own workflows using scripts or AI
What You’ll Gain
You will get:
- Real experience building AI agents used daily
- Ownership over production systems
- Deep exposure to:
- AI architecture
- Product thinking
- Iterative engineering
- Tradeoffs (cost vs latency vs accuracy)
- A portfolio that actually means something in 2026
- A strong shot at long-term roles based on performance
If you perform well, you won't just leave with a certificate; you'll leave with real-world building experience.
Who This Is Perfect For
- People who already build things for fun
- People who automate their own life with scripts/tools
- People who learn by shipping
- People who prefer responsibility over structure
- People who are excited by ambiguity
Who This Is Not For
Be honest with yourself:
- If you need step-by-step instructions
- If you avoid open-ended problems
- If you’ve never built anything outside assignments
- If you dislike using AI tools while coding
This will be frustrating for you.
How To Apply
Send:
- Your GitHub
- Links to projects (deployed preferred)
- A short note explaining:
- What you built
- Why you built it
- What you’d improve if you had more time
Strong portfolios beat strong resumes.

Build and maintain scalable web applications using Python + Django
Develop REST APIs using Django REST Framework (DRF) for internal and partner integrations
Work on frontend screens (templates / HTML / CSS / JS) and integrate APIs in the UI
Implement authentication/authorization, validations, and secure coding practices
Work with databases (MySQL/PostgreSQL), ORM, migrations, indexing, and query optimization
Deploy and manage apps on Azure (App Service / VM / Storage / Azure SQL as applicable)
Integrate third-party services (payment, SMS/email, partner APIs) when required
Write clean, maintainable code, and support production debugging & performance improvements
Collaborate with product/ops teams to deliver features on time
Must Have Skills
- Python, Django (2–4 years hands-on)
- Django REST Framework (DRF) – building and consuming REST APIs
- Strong understanding of SQL and relational databases (MySQL/PostgreSQL)
- Frontend basics: HTML, CSS, JavaScript, Bootstrap (enough to handle screens + API integration)
- Experience with Git and standard development workflows
- Comfortable working on deployments and environments on Azure
Good to Have (Preferred)
- Azure exposure: App Service, Azure Storage, Azure SQL, Key Vault, CI/CD (Azure DevOps)
- Background jobs: Celery / Redis or cron-based scheduling
- Basic understanding of security practices: JWT/session auth, permissions, rate limiting
- Experience in fintech / gift cards / loyalty / voucher systems is a plus
- Unit testing (pytest/Django test framework) and basic logging/monitoring
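Of the security practices listed above, rate limiting is compact enough to sketch. Below is a minimal token-bucket limiter in plain Python; it is illustrative only, since a production Django service would more likely use middleware or a library such as django-ratelimit backed by Redis.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Allow a burst of 3 requests, then refill at 1 request/second
bucket = TokenBucket(rate=1, capacity=3)
results = [bucket.allow() for _ in range(5)]
```

The first three calls pass on the initial burst allowance; the next two are rejected until tokens refill.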
About the Role:
We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.
Key Responsibilities:
Data Modelling & Visualization
- Build scalable and high-quality data models in Power BI using best practices.
- Define relationships, hierarchies, and measures to support effective storytelling.
- Ensure dashboards meet standards for accuracy, visualization principles, and timeliness.
Data Transformation & ETL
- Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
- Design and optimize ETL pipelines using SQL, Python, and Big Data tools.
- Manage and process large-scale datasets from various sources and formats.
Business Problem Translation
- Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
- Decompose business questions into testable hypotheses and identify relevant datasets for validation.
Performance & Troubleshooting
- Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
- Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.
Analytical Storytelling
- Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics.
- Develop data narratives that drive business impact.
Solution Design
- Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.
Required Skills & Experience:
- 3+ years of experience as a Data Engineer or in a similar data-focused role.
- Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
- Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
- Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
- Familiarity with DAX Studio and Tabular Editor.
- Experience in handling high-volume data in production environments.
Preferred (Good to Have):
- Exposure to Big Data technologies such as:
- PySpark
- Hadoop
- Hive / HDFS
- Spark Streaming (optional but preferred)
Why Join Us?
- Work with a team that's passionate about data innovation.
- Exposure to modern data stack and tools.
- Flat structure and collaborative culture.
- Opportunity to influence data strategy and architecture decisions.
Database Programmer (SQL & Python)
Experience: 4 – 5 Years
Location: Remote
Employment Type: Full-Time
About the Opportunity
We are a mission-driven HealthTech organization dedicated to bridging the gap in global healthcare equity. By harnessing the power of AI-driven clinical insights and real-world evidence, we help healthcare providers and pharmaceutical companies deliver precision medicine to underrepresented populations.
We are looking for a skilled Database Programmer with a strong blend of SQL expertise and Python automation skills to help us manage, transform, and unlock the value of complex clinical data. This is a fully remote role where your work will directly contribute to improving patient outcomes and making life-saving treatments more affordable and accessible.
Key Responsibilities
- Data Architecture & Management: Design, develop, and maintain robust relational databases to store large-scale, longitudinal patient records and clinical data.
- Complex Querying: Write and optimize sophisticated SQL queries, stored procedures, and triggers to handle deep clinical datasets, ensuring high performance and data integrity.
- Python Automation: Develop Python scripts and ETL pipelines to automate data ingestion, cleaning, and transformation from diverse sources (EHRs, lab reports, and unstructured clinical notes).
- AI Support: Collaborate with Data Scientists to prepare datasets for AI-based analytics, Knowledge Graphs, and predictive modeling.
- Data Standardization: Map and transform clinical data into standardized models (such as HL7, FHIR, or proprietary formats) to ensure interoperability across healthcare ecosystems.
- Security & Compliance: Implement and maintain rigorous data security protocols, ensuring all database activities comply with global healthcare regulations (e.g., HIPAA, GDPR).
Required Skills & Qualifications
- Education: Bachelor’s degree in Computer Science, Information Technology, Statistics, or a related field.
- SQL Mastery: 4+ years of experience with relational databases (PostgreSQL, MySQL, or MS SQL Server). You should be comfortable with performance tuning and complex data modeling.
- Python Proficiency: Strong programming skills in Python, particularly for data manipulation (Pandas, NumPy) and database interaction (SQLAlchemy, Psycopg2).
- Healthcare Experience: Familiarity with healthcare data standards (HL7, FHIR) or experience working with Electronic Health Records (EHR) is highly preferred.
- ETL Expertise: Proven track record of building and managing end-to-end data pipelines for structured and unstructured data.
- Analytical Mindset: Ability to troubleshoot complex data issues and translate business requirements into efficient technical solutions.
To process your details, please fill out the Google Form.
About Company (GeniWay)
GeniWay Technologies is pioneering India’s first AI-native platform for personalized learning and career guidance, transforming the way students learn, grow, and determine their future path. Addressing challenges in the K-12 system such as one-size-fits-all teaching and limited career awareness, GeniWay leverages cutting-edge AI to create a tailored educational experience for every student. The core technology includes an AI-powered learning engine, a 24x7 multilingual virtual tutor and Clario, a psychometrics-backed career guidance system. Aligned with NEP 2020 policies, GeniWay is on a mission to make high-quality learning accessible to every student in India, regardless of their background or region.
What you’ll do
- Build the career assessment backbone: attempt lifecycle (create/resume/submit), timing metadata, partial attempts, idempotent APIs.
- Implement deterministic scoring pipelines with versioning and audit trails (what changed, when, why).
- Own Postgres data modeling: schemas, constraints, migrations, indexes, query performance.
- Create safe, structured GenAI context payloads (controlled vocabulary, safety flags, eval datasets) to power parent/student narratives.
- Raise reliability: tests for edge cases, monitoring, reprocessing/recalculation jobs, safe logging (no PII leakage).
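Two of the patterns above — idempotent APIs and audited, versioned scoring — can be combined in one small sketch. Everything here is illustrative (the class, payload shape, and scorer are invented stand-ins, not GeniWay's actual API): a retried submit with the same idempotency key replays the stored response instead of re-scoring, and each state change lands in an audit trail.

```python
class AttemptService:
    """Sketch of an idempotent submit endpoint with an audit trail."""

    def __init__(self):
        self._results = {}   # idempotency_key -> stored response
        self.audit_log = []  # what changed and why

    def submit(self, idempotency_key: str, payload: dict) -> dict:
        # Retries with the same key replay the stored result: no double scoring
        if idempotency_key in self._results:
            return self._results[idempotency_key]
        score = sum(payload["answers"])  # stand-in deterministic scorer
        response = {"score": score, "scorer_version": "v1"}
        self._results[idempotency_key] = response
        self.audit_log.append({"key": idempotency_key, "why": "first submit"})
        return response

svc = AttemptService()
first = svc.submit("attempt-42", {"answers": [1, 0, 1]})
retry = svc.submit("attempt-42", {"answers": [1, 0, 1]})
```

In production the result store and audit log would live in Postgres tables with constraints, and `scorer_version` would let old attempts be reprocessed against the exact scoring logic that produced them.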
Must-have skills
- Backend development in Python (FastAPI/Django/Flask) or Node (NestJS) with production API experience.
- Strong SQL + PostgreSQL fundamentals (transactions, indexes, schema design, migrations).
- Testing discipline: unit + integration tests for logic-heavy code; systematic debugging approach.
- Comfort using AI coding copilots to speed up scaffolding/tests/refactors — while validating correctness.
- Ownership mindset: cares about correctness, data integrity, and reliability.
Good to have
- Experience with rule engines, scoring systems, or audit-heavy domains (fintech, healthcare, compliance).
- Event schemas/telemetry pipelines and observability basics.
- Exposure to RAG/embeddings/vector DBs or prompt evaluation harnesses.
Location: Pune (on-site for first 3 months; hybrid/WFH flexibility thereafter)
Employment Type: Full-time
Experience: 2–3 years (correctness-first; strong learning velocity)
Compensation: Competitive (₹8–10 LPA fixed cash) + ESOP (equity ownership, founding-early employee level)
Joining Timeline: 2–3 weeks / Immediate
Why join (founding team)
- You’ll build core IP: scoring integrity and data foundations that everything else depends on.
- Rare skill-building: reliable systems + GenAI-safe context/evals (not just API calls).
- Meaningful ESOP upside at an early stage.
- High trust, high ownership, fast learning.
- High-impact mission: reduce confusion and conflict in student career decisions; help families make better choices, transform student lives by making great learning personal.
Hiring process (fast)
1. 20-min intro call (fit + expectations).
2. 45–60 min SQL & data modeling, API deep dive.
3. Practical exercise (2–3 hours max) implementing a small scoring service with tests.
4. Final conversation + offer.
How to apply
Reply with your resume/LinkedIn profile plus one example of a system/feature where you owned data modeling and backend integration (a short paragraph is fine).
Hiring for Data Engineer
Experience: 5–7 years
Work Location: Hyderabad (Hybrid)
Must-Have Skills: Python, AWS Glue, PySpark, Terraform
What you’ll do
- Own and ship end-to-end product journeys (mobile-first): onboarding → assessment runner → results → career map → parent alignment.
- Build/maintain backend APIs and shared platform capabilities (auth, sessions, data contracts, telemetry).
- Integrate GenAI responsibly: prompt/versioning, eval harnesses, guardrails, fallbacks (AI is core, not a side feature).
- Set the engineering quality bar: code reviews, tests, CI/CD, release gating, observability, performance budgets.
- Mentor and lead a lean pod; grow into Lead Engineer responsibility within ~6 months based on delivery.
Must-have skills
- Strong TypeScript + React/Next.js (or equivalent) and proven experience shipping state-heavy UIs.
- Backend/API development (Node/NestJS or Python/FastAPI) with solid error handling and clean contracts.
- Good SQL fundamentals and hands-on PostgreSQL.
- Comfort using AI coding copilots (Copilot/Cursor) to accelerate scaffolding/tests/refactors — with rigorous verification.
- Startup mindset: ownership, ambiguity tolerance, and ability to ship weekly.
Good to have
- Hands-on GenAI product work: tool calling, RAG/embeddings, vector DBs (Qdrant/Pinecone), LangChain/LlamaIndex (or similar).
- Experience with conversational flows (WhatsApp or chat-like UX).
- DevOps/observability basics (logs/metrics/traces).
- Public proof of ownership: OSS, side projects, hackathons, shipped 0→1 features.
Location: Pune (on-site for first 3 months; hybrid/WFH flexibility thereafter)
Employment Type: Full-time
Experience: 3–4 years (high ownership; leadership potential)
Compensation: Competitive (₹12–15 LPA fixed cash) + ESOP (equity ownership, founding-early employee level).
Standard benefits: Health insurance, paid leave, learning/training budget.
Joining Timeline: 2–3 weeks / Immediate
Why join (founding team)
- Meaningful ownership: ESOP at an early stage (real upside, not token equity).
- Career acceleration: scope and autonomy typically seen much later in larger orgs.
- AI-first engineering culture: copilots + LLM workflows across SDLC, with strong discipline on correctness and safety.
- High-impact mission: reduce confusion and conflict in student career decisions; help families make better choices, transform student lives by making great learning personal.
- Lean, high-trust team: direct access to founder + fast decisions; minimal bureaucracy.
Hiring process (fast)
- 20-min intro call (fit + expectations).
- 60–90 min technical deep dive (system design + trade-offs).
- Practical exercise (1–2 hours max) — focused and relevant (assessment flow or GenAI eval harness).
- Final conversation + offer.
How to apply
Reply with your resume/LinkedIn profile and 2 links (any of: GitHub, portfolio, shipped product, blog, or a short note describing a feature you owned end-to-end).
Job Brief
If you have a passion for fixing bugs and programming, you may be cut out for a career in manual testing. As a Software Tester, you will play a major role in the quality assurance stage of software development. You will carry out manual tests to ensure the software meets its requirements. This involves analyzing software to prevent issues and catching bugs before the product is shipped to users. As working with code is part of the role, software testers are expected to be familiar with various coding languages.
Job Responsibilities
As a Quality Analyst Engineer, you will need to:
- Analyzing requirements
- Arranging the test environment to execute test cases
- Conducting review meetings
- Analyzing and executing test cases
- Tracking defects
- Communicating with the Test Manager
- Preparing test plans
- Reporting bugs to developers so they can be fixed
- Writing test cases for various testing processes
- Preparing summary reports
- Preparing test data for test cases
- Writing lessons-learnt documents (based on testing inputs from previous projects)
- Writing suggestion documents (to improve software quality)
- Designing test scenarios
Qualification
- Minimum 8–10 years of experience as a Software Tester or similar role
- Ability to handle multiple tasks simultaneously
- Ability to work in a fast-paced environment with minimal supervision
- Sense of ownership and pride in your performance and its impact on the company’s success
- Critical thinker and problem-solving skills
- Team player
- Good time-management skills
- Great interpersonal and communication skills
- Experience in an automated test environment
- QTP
- Test Management software
- User Acceptance Testing
- iOS testing frameworks
- Basic Programming skills
- QA Software tools
- SQL knowledge
- ALM
- Jira
Additional Details
Salary Range: Competitive
Employment Type: Full-time
Location: Delhi/ Mumbai
Required Skills and Qualifications:
- 2–3 years of professional experience in Python development.
- Strong understanding of object-oriented programming.
- Experience with frameworks such as Django, Flask, or FastAPI.
- Knowledge of REST APIs, JSON, and web integration.
- Familiarity with SQL and database management systems.
- Experience with Git or other version control tools.
- Good problem-solving and debugging skills.
- Strong communication and teamwork abilities.
What you’ll do
- Build and scale backend services and APIs using Python
- Work on cross-language integrations (Python ↔ PHP)
- Develop frontend features using React (Angular is a plus)
- Deploy, monitor, and manage applications on AWS
- Own features end-to-end: development, performance, and reliability
- Collaborate closely with product, QA, and engineering teams
Tech Stack
- Backend: Python (working knowledge of PHP is a strong plus)
- Frontend: React (Angular is a plus)
- Cloud: AWS
- Version Control: Git / GitHub
Experience
- 5–10 years of professional software development experience
- Strong hands-on experience with Python
- Hands-on experience deploying and managing applications on AWS
- Working knowledge of modern frontend frameworks
Position: Insights Manager
Location: Gurugram (Onsite)
Experience Required: 4+ Years
Working Days: 5 Days (Mon to Fri)
About the Role
We are seeking a hands-on Insights Manager to build the analytical backbone that powers decision-making. This role sits at the centre of the data ecosystem, partnering with Category, Commercial, Marketing, Sourcing, Fulfilment, Product, and Growth teams to translate data into insight, automation, and action.
You will design self-running reporting systems, maintain data quality in collaboration with data engineering, and build analytical models that directly improve pricing, customer experience, and operational efficiency. The role requires strong e-commerce domain understanding and the ability to move from data to decisions with speed and precision.
Key Responsibilities
1. Data Platform & Governance
- Partner with data engineering to ensure clean and reliable data across Shopify, GA4, Ad platforms, CRM, and ERP systems
- Define and maintain KPI frameworks (ATC, CVR, AOV, Repeat Rate, Refunds, LTV, CAC, etc.)
- Oversee pipeline monitoring, QA checks, and metric documentation
2. Reporting, Dashboards & Automation
- Build automated datamarts and dashboards for business teams
- Integrate APIs and automate data flows across multiple sources
- Create actionable visual stories and executive summaries
- Use AI and automation tools to improve insight delivery speed
3. Decision Models & Applied Analytics
- Build models for pricing, discounting, customer segmentation, inventory planning, delivery SLAs, and recommendations
- Translate analytics outputs into actionable playbooks for internal teams
4. Insights & Actionability
- Diagnose performance shifts and identify root causes
- Deliver weekly and monthly insight-driven recommendations
- Improve decision-making speed and quality across functions
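Several of the KPIs named in the framework above (CVR, AOV, Repeat Rate) reduce to simple aggregations. A hedged sketch, with invented field names and sample data, just to show the definitions in code:

```python
def kpis(sessions: int, orders: list[dict]) -> dict:
    """Compute a few standard e-commerce KPIs from raw order rows."""
    revenue = sum(o["amount"] for o in orders)
    customers = {o["customer"] for o in orders}
    repeaters = {c for c in customers
                 if sum(1 for o in orders if o["customer"] == c) > 1}
    return {
        "CVR": len(orders) / sessions,          # conversion rate
        "AOV": revenue / len(orders),           # average order value
        "repeat_rate": len(repeaters) / len(customers),
    }

sample = [
    {"customer": "c1", "amount": 500},
    {"customer": "c1", "amount": 300},
    {"customer": "c2", "amount": 200},
]
m = kpis(sessions=100, orders=sample)
```

In practice these would be SQL datamart queries refreshed on a schedule, but agreeing on the metric definitions — exactly what counts as a session, an order, a repeat customer — is the harder part of the KPI framework.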
Qualifications & Experience
- 4–7 years of experience in analytics or product insights (e-commerce / D2C / retail)
- Strong SQL and Python skills
- Hands-on experience with GA4, GTM, and dashboarding tools (Looker / Tableau / Power BI)
- Familiarity with CRM platforms like Klaviyo, WebEngage, or MoEngage
- Strong understanding of e-commerce KPIs and customer metrics
- Ability to communicate insights clearly to non-technical stakeholders
What We Offer
- Greenfield opportunity to build the data & insights platform from scratch
- High business impact across multiple functions
- End-to-end exposure from analytics to automation and applied modelling
- Fast-paced, transparent, and collaborative work culture
About Us:
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers wherever they are in their data monetization journey.
Job Summary:
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities:
- ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
- Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
- Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
- Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
- API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
- Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
- Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
- Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
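The entity-resolution and ID-generation responsibility above hinges on determinism: the same real-world entity must always map to the same node ID. A minimal sketch of one common approach — hashing normalized attributes — with the caveat that the project's actual ID scheme and normalization rules may differ:

```python
import hashlib

def stable_entity_id(name: str, entity_type: str) -> str:
    """Derive a deterministic ID from normalized entity attributes,
    so repeated imports of the same entity collide on the same node."""
    # Normalization: lowercase and collapse whitespace before hashing
    key = f"{entity_type}|{' '.join(name.lower().split())}"
    return entity_type + "/" + hashlib.sha256(key.encode()).hexdigest()[:12]

id1 = stable_entity_id("New  Delhi", "City")
id2 = stable_entity_id("new delhi", "City")
```

Both spellings normalize to the same key and therefore the same ID; a real pipeline would first try to match against existing IDs in the knowledge graph and only mint a new one on a miss.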
Qualifications and Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
- Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
- Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
- Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
- Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
- Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
- Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
- Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
- Experience with data validation techniques and tools.
- Familiarity with CI/CD practices and the ability to work in an Agile framework.
- Strong problem-solving skills and keen attention to detail.
Preferred Qualifications:
- Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
- Familiarity with similar large-scale public dataset integration initiatives.
- Experience with multilingual data integration.
Job Summary
We are looking for an experienced Python DBA with strong expertise in Python scripting and SQL/NoSQL databases. The candidate will be responsible for database administration, automation, performance optimization, and ensuring the availability and reliability of database systems.
Key Responsibilities
- Administer and maintain SQL and NoSQL databases
- Develop Python scripts for database automation and monitoring
- Perform database performance tuning and query optimization
- Manage backups, recovery, replication, and high availability
- Ensure data security, integrity, and compliance
- Troubleshoot and resolve database-related issues
- Collaborate with development and infrastructure teams
- Monitor database health and performance
- Maintain documentation and best practices
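The "Python scripts for database automation and monitoring" responsibility often starts with a health report like the one sketched below. Demonstrated on SQLite for self-containment; against PostgreSQL the same idea would use psycopg2 and catalog views such as pg_stat_user_tables.

```python
import sqlite3

def table_row_counts(conn: sqlite3.Connection) -> dict:
    """Collect per-table row counts for a periodic health report."""
    counts = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (name,) in tables:
        # Table names come from the catalog, not user input, so this
        # f-string interpolation is safe here
        counts[name] = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()[0]
    return counts

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,)])
report = table_row_counts(conn)
```

A monitoring job would run this on a schedule, compare counts against expected growth, and alert on anomalies.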
Required Skills
- 10+ years of experience in Database Administration
- Strong proficiency in Python
- Experience with SQL databases (PostgreSQL, MySQL, Oracle, SQL Server)
- Experience with NoSQL databases (MongoDB, Cassandra, etc.)
- Strong understanding of indexing, schema design, and performance tuning
- Good analytical and problem-solving skills
We are seeking an experienced Engineering Leader to drive the design and delivery of secure, scalable, and high-performance financial platforms. This role requires strong technical leadership, people management skills, and deep understanding of FinTech systems, compliance, and reliability.
Key Responsibilities
- Lead multiple engineering teams delivering FinTech platforms (payments, lending, banking, wallets, trading, or risk systems)
- Own architecture and system design for high-availability, low-latency, secure systems
- Partner with Product, Compliance, Risk, and Business teams to translate financial requirements into technical solutions
- Ensure adherence to security standards, regulatory compliance (PCI-DSS, SOC2, ISO), and data privacy
- Drive best practices in coding, testing, DevOps, observability, and system resilience
- Build, mentor, and retain high-performing engineering teams
- Oversee sprint planning, delivery timelines, and stakeholder communication
- Lead incident response, root cause analysis, and platform stability improvements
Required Skills & Qualifications
- 4+ years in leadership roles
- Strong hands-on expertise in Java / Node.js / Python / .NET / Go
- Experience building FinTech platforms — payments, banking, lending, trading, or risk systems
- Deep knowledge of distributed systems, microservices, APIs, databases, and cloud (AWS/Azure/GCP)
- Strong understanding of security, fraud prevention, and regulatory compliance
- Experience working in Agile/Scrum environments
- Excellent stakeholder and people management skills
Key Responsibilities:
- Lead the architecture, design, and implementation of scalable, secure, and highly available AWS infrastructure leveraging services such as VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, and Secrets Manager.
- Develop and maintain reusable, modular IaC frameworks using Terraform and Terragrunt, and mentor team members on IaC best practices.
- Drive automation of infrastructure provisioning, deployment workflows, and routine operations through advanced Python scripting.
- Take ownership of cost optimization strategy by analyzing usage patterns, identifying savings opportunities, and implementing guardrails across multiple AWS environments.
- Define and enforce infrastructure governance, including secure access controls, encryption policies, and secret management mechanisms.
- Collaborate cross-functionally with development, QA, and operations teams to streamline and scale CI/CD pipelines for containerized microservices on Kubernetes (EKS).
- Establish monitoring, alerting, and observability practices to ensure platform health, resilience, and performance.
- Serve as a technical mentor and thought leader, guiding junior engineers and shaping cloud adoption and DevOps culture across the organization.
- Evaluate emerging technologies and tools, recommending improvements to enhance system performance, reliability, and developer productivity.
- Ensure infrastructure complies with security, regulatory, and operational standards, and drive initiatives around audit readiness and compliance.
Mandatory Skills & Experience:
- AWS (Advanced Expertise): VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, Secrets Management
- Infrastructure as Code: Extensive experience with Terraform and Terragrunt, including module design and IaC strategy
- Kubernetes: Strong hands-on command of Kubernetes administration and workload management
- Scripting & Automation: Proficient in Python, with a strong track record of building tools, automating workflows, and integrating cloud services
- Cloud Cost Optimization: Proven ability to analyze cloud spend and implement sustainable cost control strategies
- Leadership: Experience in leading DevOps/infrastructure teams or initiatives, mentoring engineers, and making architecture-level decisions
Nice to Have:
- Experience designing or managing CI/CD pipelines for Kubernetes-based environments
- Backend development background in Python (e.g., FastAPI, Flask)
- Familiarity with monitoring/observability tools such as Prometheus, Grafana, CloudWatch
- Understanding of system performance tuning, capacity planning, and scalability best practices
- Exposure to compliance standards such as SOC 2, HIPAA, or ISO 27001
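As an illustration of the cost-optimization responsibility above, here is a minimal, stdlib-only sketch of a rightsizing check over utilization records. In practice the data would come from CloudWatch or Cost Explorer; the instance names, fields, and CPU threshold here are all hypothetical assumptions:

```python
from dataclasses import dataclass

@dataclass
class InstanceUsage:
    """Hypothetical record of one instance's recent utilization."""
    instance_id: str
    instance_type: str
    avg_cpu_pct: float      # average CPU over the lookback window
    monthly_cost_usd: float

def rightsizing_candidates(usage, cpu_threshold=10.0):
    """Flag instances whose average CPU sits below the threshold,
    sorted by potential monthly savings (highest first)."""
    idle = [u for u in usage if u.avg_cpu_pct < cpu_threshold]
    return sorted(idle, key=lambda u: u.monthly_cost_usd, reverse=True)

fleet = [
    InstanceUsage("i-aaa", "m5.2xlarge", 4.2, 280.0),
    InstanceUsage("i-bbb", "c5.xlarge", 62.0, 140.0),
    InstanceUsage("i-ccc", "r5.large", 7.9, 95.0),
]
for u in rightsizing_candidates(fleet):
    print(u.instance_id, u.monthly_cost_usd)
# i-aaa 280.0
# i-ccc 95.0
```

A real guardrail would feed flagged instances into tagging or ticketing automation rather than printing them.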
Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design, all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.
The Role
We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.
This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.
You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.
What you’ll do
● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python
● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems
● Model data using dbt, with an emphasis on quality, reuse, and documentation
● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics
● Write high-performance SQL and support analytics and reporting teams in self-serve data access
● Monitor and improve data quality, lineage, and governance across critical workflows
● Collaborate with engineers, analysts, and business partners across the US, UK, and India
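The ingestion and normalization work above often reduces to mapping each platform's export schema onto one common record shape. A minimal stdlib-only sketch; the column names are illustrative assumptions (Google Ads does report spend in micros, but real exports carry many more fields):

```python
import csv
import io

# Hypothetical raw exports: each ad platform names its columns differently.
GOOGLE_ADS = "campaign,cost_micros\nbrand,1250000\n"
META = "campaign_name,spend\nbrand,2.75\n"

def normalize_google(text):
    """Convert a Google Ads-style export (cost in micros) to common records."""
    for row in csv.DictReader(io.StringIO(text)):
        yield {"source": "google_ads", "campaign": row["campaign"],
               "spend_usd": int(row["cost_micros"]) / 1_000_000}

def normalize_meta(text):
    """Convert a Meta-style export (spend already in currency units)."""
    for row in csv.DictReader(io.StringIO(text)):
        yield {"source": "meta", "campaign": row["campaign_name"],
               "spend_usd": float(row["spend"])}

records = list(normalize_google(GOOGLE_ADS)) + list(normalize_meta(META))
print(records)
```

In production this logic would typically live in an Airflow task writing to BigQuery, with dbt models downstream.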
What You Bring
● 4+ years of data engineering experience, ideally in a global, distributed team
● Strong Python development skills and hands-on engineering experience
● Expert in SQL for data transformation, analysis, and debugging
● Deep knowledge of Airflow and orchestration best practices
● Proficient in dbt (data modeling, testing, release workflows)
● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus
● Strong grasp of data governance, observability, and privacy standards
● Excellent written and verbal communication skills
Nice to have
● Experience working with digital marketing and performance data, including:
Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)
● Familiarity with BI tools like Tableau or Looker
● Exposure to attribution models, media mix modeling, or A/B testing infrastructure
● Collaboration experience with data scientists or machine learning workflows
Why Join Us
● Monthly long weekends — every third Friday off
● Wellness reimbursement to support your health and balance
● Paid parental leave
● Remote-first with flexibility and trust
● Work with a world-class data and marketing team inside a globally recognized brand
Company: Ethara AI
Location: Gurgaon (Work From Office)
Employment Type: Full-Time
Experience Required: 2–4 Years
Open Roles: Software Engineers (Python Fullstack)
About Us
Ethara AI is a leading AI and data services company in India, specializing in building high-quality, domain-specific datasets for Large Language Model (LLM) fine-tuning. Our work bridges the gap between academic learning and real-world AI applications, and we are committed to nurturing the next generation of AI professionals.
Role Overview:-
We are looking for experienced Python Fullstack Software Engineers who can contribute to post-training AI development workflows with strong proficiency in coding tasks and evaluation logic. This role involves working on high-impact AI infrastructure projects, including but not limited to:
Code generation, validation, and transformation across Python, Java, JavaScript, and modern frameworks;
Evaluation and improvement of model-generated code responses;
Designing and verifying web application features, APIs, and test cases used in AI model alignment;
Interpreting and executing task specifications to meet rigorous quality benchmarks;
Collaborating with internal teams to meet daily throughput and quality targets within a structured environment.
Key Responsibilities:-
Work on fullstack engineering tasks aligned with LLM post-training workflows;
Analyze model-generated outputs for correctness, coherence, and adherence to task requirements;
Write, review, and verify application logic and coding prompts across supported languages and frameworks;
Maintain consistency, quality, and efficiency in code-focused deliverables;
Engage with leads and PMs to meet productivity benchmarks (8–9 working hours daily);
Stay updated with AI development standards and contribute to refining internal engineering processes.
Technical Skills Required:-
Strong proficiency in Python; Java and Node.js are nice to have;
Strong experience in frontend technologies: React.js, HTML/CSS, TypeScript;
Familiarity with REST APIs, testing frameworks, and Git-based workflows;
Ability to analyze, debug, and rewrite logic for correctness and clarity;
Good understanding of model response evaluation and instruction-based coding logic
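The model-response evaluation work described above can be sketched as a tiny harness that runs a model-generated function against reference test cases. This is an illustrative toy under assumed conventions, not a production setup: real systems execute untrusted code in a proper sandbox, never with a bare `exec`:

```python
def evaluate_candidate(source, func_name, test_cases):
    """Execute model-generated source in an isolated namespace and
    score it against (args, expected) reference test cases."""
    ns = {}
    try:
        exec(source, ns)  # NOTE: toy only; sandbox untrusted code in real use
        func = ns[func_name]
    except Exception:
        return 0.0        # code that fails to define the function scores zero
    passed = 0
    for args, expected in test_cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass          # runtime errors count as failures, not crashes
    return passed / len(test_cases)

candidate = "def add(a, b):\n    return a + b\n"
score = evaluate_candidate(candidate, "add", [((1, 2), 3), ((0, 0), 0)])
print(score)  # 1.0
```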
Qualifications:-
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field;
2–4 years of experience in a software development role (Fullstack preferred);
Prior exposure to AI/LLM environments or code-based evaluation tasks is a plus;
Excellent written communication and logical reasoning abilities;
Comfortable working from office in Gurgaon and committing to 8–9 hours of productive work daily
Why Join Us
Be part of a high-growth team at the forefront of LLM post-training development;
Work on real-world AI engineering problems with production-grade impact;
Competitive compensation with performance-driven growth opportunities;
Structured workflow, collaborative culture, and technically challenging projects
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC 2, and GDPR compliance.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Role Responsibilities:
The following are high-level responsibilities you will take on, though the role is not limited to them:
- Design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- 4-8 years of experience in data engineering or data platform development
- Strong hands-on experience in SQL, Snowflake, Python, and Airflow
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other Cloud Platforms.
What we need:
- B.Tech in Computer Science or equivalent.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Job Details
- Job Title: Java Full Stack Developer
- Industry: Global digital transformation solutions provider
- Domain: Information technology (IT)
- Experience Required: 5-7 years
- Working Mode: Hybrid (3 days in office)
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
SDET (Software Development Engineer in Test)
Job Responsibilities:
• Test Automation:
- Develop, maintain, and execute automated test scripts using test automation frameworks.
- Design and implement testing tools and frameworks to support automated testing.
• Software Development:
- Participate in the design and development of software components to improve testability.
- Write code actively, contribute to the development of tools, and work closely with developers to debug complex issues.
• Quality Assurance:
- Collaborate with the development team to understand software features and technical implementations.
- Develop quality assurance standards and ensure adherence to best testing practices.
• Integration Testing:
- Conduct integration and functional testing to ensure that components work as expected individually and when combined.
• Performance and Scalability Testing:
- Perform performance and scalability testing to identify bottlenecks and optimize application performance.
• Test Planning and Execution:
- Create detailed, comprehensive, and well-structured test plans and test cases.
- Execute manual and/or automated tests and analyze results to ensure product quality.
• Bug Tracking and Resolution:
- Identify, document, and track software defects using bug tracking tools.
- Verify fixes and work closely with developers to resolve issues.
• Continuous Improvement:
- Stay updated on emerging tools and technologies relevant to the SDET role.
- Constantly look for ways to improve testing processes and frameworks.
Skills and Qualifications:
• Strong programming skills, particularly in languages such as COBOL, JCL, Java, C#, Python, or JavaScript.
• Strong experience in Mainframe environments.
• Experience with test automation tools and frameworks like Selenium, JUnit, TestNG, or Cucumber.
• Excellent problem-solving skills and attention to detail.
• Familiarity with CI/CD tools and practices, such as Jenkins, Git, and Docker.
• Good understanding of web technologies and databases is often beneficial.
• Strong communication skills for interfacing with cross-functional teams.
Qualifications:
• 5+ years of experience as a software developer, QA engineer, or SDET.
• 5+ years of hands-on experience with Java or Selenium.
• 5+ years of hands-on experience with Mainframe environments.
• 4+ years designing, implementing, and running test cases.
• 4+ years working with test processes, methodologies, tools, and technology.
• 4+ years performing functional and UI testing and quality reporting.
• 3+ years of technical QA management experience leading onshore and offshore resources.
• Passion for driving best practices in the testing space.
• Thorough understanding of functional, stress, performance, various forms of regression testing, and mobile testing.
• Knowledge of software engineering practices and agile approaches.
• Experience building or improving test automation frameworks.
• Proficiency in CI/CD integration and pipeline development in Jenkins, Spinnaker, or similar tools.
• Proficiency in UI automation (Serenity/Selenium, Robot, Watir).
• Experience with Gherkin (BDD/TDD).
• Ability to quickly tackle and diagnose issues within the quality assurance environment and communicate that knowledge to a varied audience of technical and non-technical partners.
• Strong desire to establish and improve product quality.
• Willingness to take on challenges while being part of a team.
• Ability to work under tight deadlines and within a team environment.
• Experience in test automation using UFT and Selenium.
• UFT/Selenium experience in building object repositories, standard and custom checkpoints, parameterization, reusable functions, recovery scenarios, descriptive programming, and API testing.
• Knowledge of VBScript, C#, Java, HTML, and SQL.
• Experience using Git or other version control systems.
• Experience developing, supporting, and/or testing web applications.
• Understanding of the need to test security requirements.
• Ability to work with APIs in JSON and XML formats, with experience using API testing tools like Postman, Swagger, or SoapUI.
• Excellent communication, collaboration, reporting, analytical, and problem-solving skills.
• Solid understanding of release cycles and QA/testing methodologies.
• ISTQB certification is a plus.
Skills: Python, Mainframe, C#
Notice period - 0 to 15days only
About the Role
We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.
Responsibilities
- Develop and maintain full-stack applications.
- Implement clean, maintainable, and efficient code.
- Collaborate with designers, product managers, and backend engineers.
- Participate in code reviews and debugging.
- Work with REST APIs/GraphQL.
- Contribute to CI/CD pipelines.
- Work independently as well as within a collaborative team environment.
Required Technical Skills
- Strong knowledge of JavaScript/TypeScript.
- Experience with React.js, Next.js.
- Backend experience with Node.js, Express, NestJS.
- Understanding of SQL/NoSQL databases.
- Experience with Git, APIs, and debugging tools.
- Cloud familiarity (AWS/GCP/Azure).
AI and System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
Soft Skills
- Strong problem-solving ability.
- Good communication and teamwork.
- Fast learner and adaptable.
Education
Bachelor's degree in Computer Science / Engineering or equivalent.
Key Responsibilities
• Design and architect robust, scalable, and secure software solutions.
• Provide technical advice by evaluating new technologies and products to determine their feasibility and fit with the current business environment, detect critical deficiencies, and recommend solutions.
• Supervise and review technology diagnosis and assessment activities.
• Work with project managers to define scope and cost estimates.
• Collaborate closely with product managers, developers, and stakeholders to align technical solutions with business objectives.
• Provide leadership and mentorship to the development team, fostering a culture of innovation and excellence.
• Evaluate and recommend tools, technologies, and frameworks to optimize product performance and development.
• Oversee the end-to-end technical implementation of projects, ensuring high-quality deliverables within defined timelines.
• Establish the best practices for software development, deployment, and maintenance.
• Stay updated on emerging trends in software architecture and maritime technology to integrate industry best practices.
Required Skills and Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Proven experience (8+ years) in software architecture and design, with a focus on Python.
• Strong proficiency in web-based application development and cloud computing technologies.
• Expertise in modern architecture frameworks, microservices, and RESTful API design.
• Excellent communication and interpersonal skills, with the ability to convey technical concepts to non-technical stakeholders.
• Strong problem-solving skills and a proactive approach to addressing challenges.
About the Role
We're seeking a Python Backend Developer to join our insurtech analytics team. This role focuses on developing backend APIs, automating insurance reporting processes, and supporting data analysis tools. You'll work with insurance data, build REST APIs, and help streamline operational workflows through automation.
Key Responsibilities
- Automate insurance reporting processes including bordereaux, reconciliations, and data extraction from various file formats
- Support and maintain interactive dashboards and reporting tools for business stakeholders
- Develop Python scripts and applications for data processing, validation, and transformation
- Develop and maintain backend APIs using FastAPI or Flask
- Perform data analysis and generate insights from insurance datasets
- Automate recurring analytical and reporting tasks
- Work with SQL databases to query, analyze, and extract data
- Collaborate with business users to understand requirements and deliver solutions
- Document code, processes, and create user guides for dashboards and tools
- Support data quality initiatives and implement validation checks
Requirements
Essential
- 2+ years of Python development experience
- Strong knowledge of Python libraries: Pandas, NumPy for data manipulation
- Experience building web applications or dashboards with Python frameworks
- Knowledge of FastAPI or Flask for building backend APIs and applications
- Proficiency in SQL and working with relational databases
- Experience with data visualization libraries (Matplotlib, Plotly, Seaborn)
- Ability to work with Excel, CSV, and other data file formats
- Strong problem-solving and analytical thinking skills
- Good communication skills to work with non-technical stakeholders
Desirable
- Experience in insurance or financial services industry
- Familiarity with insurance reporting processes (bordereaux, reconciliations, claims data)
- Experience with Azure cloud services (Azure Functions, Blob Storage, SQL Database)
- Experience with version control systems (Git, GitHub, Azure DevOps)
- Experience with API development and RESTful services
Tech Stack
Python 3.x, FastAPI, Flask, Pandas, NumPy, Plotly, Matplotlib, SQL Server, MS Azure, Git, Azure DevOps, REST APIs, Excel/CSV processing libraries
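As a sketch of the reconciliation tasks above: compare per-policy premium totals between a bordereaux extract and the ledger, and flag breaks. This is a minimal stdlib example with hypothetical policy IDs and amounts; a production version would ingest real Excel/CSV files, likely via Pandas:

```python
from decimal import Decimal

def reconcile(bordereaux_rows, ledger_rows):
    """Compare premium totals per policy between a bordereaux extract
    and the ledger; return policies whose totals disagree, with the
    difference (bordereaux minus ledger)."""
    def totals(rows):
        out = {}
        for policy_id, amount in rows:
            # Decimal avoids float rounding surprises on money amounts
            out[policy_id] = out.get(policy_id, Decimal("0")) + Decimal(amount)
        return out
    b, l = totals(bordereaux_rows), totals(ledger_rows)
    breaks = {}
    for policy_id in b.keys() | l.keys():
        diff = b.get(policy_id, Decimal("0")) - l.get(policy_id, Decimal("0"))
        if diff != 0:
            breaks[policy_id] = diff
    return breaks

breaks = reconcile(
    [("P001", "100.00"), ("P002", "50.00")],
    [("P001", "100.00"), ("P002", "45.00"), ("P003", "10.00")],
)
print(sorted(breaks.items()))
```

A FastAPI endpoint could expose this check so business users trigger reconciliations on demand.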
We have an urgent opening for a Senior Automation QA professional to join a global life sciences data platform company. Immediate interview slots are available.
🔹 Quick Role Overview
- Role: Senior Automation QA
- Location: Pune (Hybrid, 3 days work from office)
- Employment Type: Full-Time
- Experience Required: 5+ Years
- Interview Process: 2–3 Rounds
- Qualification: B.E / B.Tech
- Notice Period : 0-30 Days
📌 Job Description
IntegriChain is the data and business process platform for life sciences manufacturers, delivering visibility into patient access, affordability, and adherence. The platform enables manufacturers to drive gross-to-net savings, ensure channel integrity, and improve patient outcomes.
We are expanding our Engineering team to strengthen our ability to process large volumes of healthcare and pharmaceutical data at enterprise scale.
The Senior Automation QA will be responsible for ensuring software quality by designing, developing, and maintaining automated test frameworks. This role involves close collaboration with engineering and product teams, ownership of test strategy, mentoring junior QA engineers, and driving best practices to improve product reliability and release efficiency.
🎯 Key Responsibilities
- Hands-on QA across UI, API, and Database testing – both Automation & Manual
- Analyze requirements, user stories, and technical documents to design detailed test cases and test data
- Design, build, execute, and maintain automation scripts using BDD (Gherkin), Pytest, and Playwright
- Own and maintain QA artifacts: Test Strategy, BRD, defect metrics, leakage reports, quality dashboards
- Work with stakeholders to review and improve testing approaches using data-backed quality metrics
- Ensure maximum feasible automation coverage in every sprint
- Perform functional, integration, and regression testing in Agile & DevOps environments
- Drive Shift-left testing, identifying defects early and ensuring faster closures
- Contribute to enhancing automation frameworks with minimal guidance
- Lead and mentor a QA team (up to 5 members)
- Support continuous improvement initiatives and institutionalize QA best practices
- Act as a problem-solver and strong team collaborator in a fast-paced environment
🧩 Desired Skills & Competencies
✅ Must-Have:
- 5+ years of experience in test planning, test case design, test data preparation, automation & manual testing
- 3+ years of strong UI & API automation experience using Playwright with Python
- Solid experience in BDD frameworks (Gherkin, Pytest)
- Strong database testing skills (Postgres / Snowflake / MySQL / RDS)
- Hands-on experience with Git and Jenkins (DevOps exposure)
- Working experience with JMeter
- Experience in Agile methodologies (Scrum / Kanban)
- Excellent problem-solving, analytical, communication, and stakeholder management skills
👍 Good to Have:
- Experience testing AWS / Cloud-hosted applications
- Exposure to ETL processes and BI reporting systems
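The BDD-style automation described above maps Gherkin steps onto Python step functions. Below is a self-contained toy illustrating that idea; the `step` decorator and `run_scenario` runner are hypothetical simplifications written for this sketch, not the real pytest-bdd or Playwright APIs:

```python
import re

# Registry mapping step-text patterns to their Python implementations.
STEPS = []

def step(pattern):
    """Register a step function under a regex (toy stand-in for
    pytest-bdd's given/when/then decorators)."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r"the user enters (\d+) items")
def enter_items(ctx, n):
    ctx["cart"] = int(n)

@step(r"the cart shows (\d+) items")
def check_cart(ctx, n):
    assert ctx["cart"] == int(n)

def run_scenario(lines):
    """Dispatch each Gherkin line to the first matching step function,
    threading a shared context dict through the scenario."""
    ctx = {}
    for line in lines:
        for pattern, func in STEPS:
            m = pattern.search(line)
            if m:
                func(ctx, *m.groups())
                break
    return ctx

ctx = run_scenario(["Given the user enters 3 items",
                    "Then the cart shows 3 items"])
print(ctx)  # {'cart': 3}
```

Real suites would replace the context dict with Playwright page objects and let Pytest collect and report the scenarios.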
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- Company background: product companies preferred; exceptions for service-company candidates with strong MLOps and AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable)
- Are you available for a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master's degree in Computer Science, Machine Learning, Data Engineering, or a related field.
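One of the observability responsibilities named above, data-drift detection, is commonly implemented with the population stability index (PSI). A minimal stdlib sketch over pre-binned distributions; the bin values are made up, and the ~0.2 alert threshold is a common convention rather than a universal rule:

```python
import math

def population_stability_index(expected_pcts, actual_pcts, eps=1e-6):
    """PSI between two pre-binned percentage distributions; values
    above roughly 0.2 are commonly treated as significant drift."""
    psi = 0.0
    for e, a in zip(expected_pcts, actual_pcts):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time feature distribution
current = [0.10, 0.20, 0.30, 0.40]    # live-traffic distribution
psi = population_stability_index(baseline, current)
print(round(psi, 3))  # 0.228
```

In the stack described above, a score like this would be emitted per feature as a CloudWatch or Prometheus metric with an alert on the threshold.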
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
About the Role
We’re hiring a Full Stack Engineer who can own features end to end, from UI to APIs to data models.
This is not a “ticket executor” role. You’ll work directly with product, AI, and founders to shape how users interact with intelligent financial systems.
If you enjoy shipping real features, fixing real problems, and seeing users actually use what you built, this role is for you.
What You Will Do
- Build and ship frontend features using React, Next.js, and React Native
- Develop backend services and APIs using Python and/or Golang
- Own end-to-end product flows like onboarding, dashboards, insights, and AI conversations
- Integrate frontend with backend and AI services (LLMs, tools, data pipelines)
- Design and maintain PostgreSQL schemas, queries, and migrations
- Ensure performance, reliability, and clean architecture across the stack
- Collaborate closely with product, AI, and design to ship fast and iterate
- Debug production issues and continuously improve UX and system quality
What We’re Looking For
- 2 to 3+ years of professional full stack engineering experience
- Strong hands-on experience with React, Next.js, and React Native
- Backend experience with Python and/or Golang in production
- Solid understanding of PostgreSQL, APIs, and system design
- Strong fundamentals in HTML, CSS, TypeScript, and modern frontend patterns
- Ability to work independently and take ownership in a startup environment
- Product-minded engineer who thinks in terms of user outcomes, not just code
- B.Tech in Computer Science or related field
Nice to Have
- Experience with fintech, dashboards, or data-heavy products
- Exposure to AI-powered interfaces, chat systems, or real-time data
- Familiarity with cloud platforms like AWS or GCP
- Experience handling sensitive or regulated data
Why Join Alpheva AI
- Build real product used by real users from day one
- Work directly with founders and influence core product decisions
- Learn how AI-native fintech products are built end to end
- High ownership, fast execution, zero corporate nonsense
- Competitive compensation with meaningful growth upside
Employment Type: Full-time, Permanent
Location: Near Bommasandra Metro Station, Bangalore (Work from Office – 5 days/week)
Notice Period: 15 days or less preferred
About the Company:
SimStar Asia Ltd is a joint venture of the SimGems and StarGems Group — a Hong Kong–based multinational organization engaged in the global business of conflict-free, high-value diamonds.
SimStar maintains the highest standards of integrity. Any candidate found engaging in unfair practices at any stage of the interview process will be disqualified and blacklisted.
Experience Required
- 4+ years of relevant professional experience.
Key Responsibilities
- Hands-on backend development using Python (mandatory).
- Write optimized and complex SQL queries; perform query tuning and performance optimization.
- Work extensively with the Odoo framework, including development and deployment.
- Manage deployments using Docker and/or Kubernetes.
- Develop frontend components using OWL.js or any modern JavaScript framework.
- Design scalable systems with a strong foundation in Data Structures, Algorithms, and System Design.
- Handle API integrations and data exchange between systems.
- Participate in technical discussions and architecture decisions.
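The query-tuning expectation above can be illustrated with a self-contained sketch. This is not from the posting: it uses SQLite (bundled with Python) purely to show how an index changes a query plan, and the table and index names are invented for the example. On the Odoo stack the equivalent check would be PostgreSQL's EXPLAIN ANALYZE.

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index it can seek directly to the matching rows.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(plan_before[-1][-1])  # e.g. a SCAN over orders
print(plan_after[-1][-1])   # e.g. a SEARCH using idx_orders_customer
```

Reading the plan before and after adding an index is exactly the kind of optimization scenario an interview assessment tends to probe.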
Interview Expectations
- Candidates must be comfortable writing live code during interviews.
- SQL queries and optimization scenarios will be part of the technical assessment.
Must-Have Skills
- Python backend development
- Advanced SQL
- Odoo Framework & Deployment
- Docker / Kubernetes
- JavaScript frontend (OWL.js preferred)
- System Design fundamentals
- API integration experience
About us
Cere Labs is a Mumbai based company working in the field of Artificial Intelligence. It is a product company that utilizes the latest technologies such as Python, Redis, neo4j, MVC, Docker, Kubernetes to build its AI platform. Cere Labs’ clients are primarily from the Banking and Finance domain in India and US. The company has a great environment for its employees to learn and grow in technology.
Software Developer
Job brief
Cere Labs is seeking to hire a skilled and passionate software developer to help with the development of our current projects and product. Your duties will primarily revolve around building software by writing code, as well as modifying software to fix errors and improve its performance. You will also be involved in writing test cases and testing.
To be successful in this role, you will need extensive knowledge of programming languages such as Java, Python, JavaScript, and React.
Ultimately, the role of the Software Engineer is to build high-quality, innovative, and fully performing software that complies with coding standards and technical design.
Responsibilities
- Develop flowcharts, layouts and documentation to identify requirements and solutions
- Write well-designed, testable code
- Develop software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug and upgrade existing systems
- Deploy programs and test the deployed code
- Comply with project plans and industry standards
Requirements
- BE degree in Computer Science or IT
- Ability to understand given requirements and produce a design based on the specification
- Ability to develop unit tests for code components or complete applications
- Must be a full-stack developer with a solid grasp of software engineering concepts
- Ability to develop software in Python, Java, and JavaScript
- Excellent knowledge of relational databases (MySQL) and ORM technologies (JPA2, Hibernate), as well as in-memory data stores such as Redis
- Experience developing web applications using at least one popular web framework (JSF, Spring MVC, React) is preferred
- Experience with test-driven development
- Proficiency in software engineering tools including popular IDE’s such as PyCharm, Visual Studio Code and Eclipse
- Proven work experience as a Software Engineer or Software Developer will be an added advantage
Working conditions
Hours: 9:00 AM to 6:00 PM
Weekly off: Sunday, First and Third Saturdays
Mode: Work from office
Recruitment process
The selection process includes:
- Written test
- Technical interview
- Final interview
Compensation
CTC: Rs. 3–4 lakhs per annum, depending on performance in the selection process.
About Snabbit: Snabbit is India’s first Quick-Service App, delivering home services in just 15 minutes through a hyperlocal network of trained and verified professionals. Backed by Nexus Venture Partners (investors in Zepto, Unacademy, and Ultrahuman), Snabbit is redefining convenience in home services with quality and speed at its core. Founded by Aayush Agarwal, former Chief of Staff at Zepto, Snabbit is pioneering the Quick-Commerce revolution in services. In a short period, we’ve completed thousands of jobs with unmatched customer satisfaction and are scaling rapidly.
At Snabbit, we don’t just build products—we craft solutions that transform everyday lives. This is a playground for engineers who love solving complex problems, building systems from the ground up, and working in a fast-paced, ownership-driven environment. You’ll work alongside some of the brightest minds, pushing boundaries and creating meaningful impact at scale.
Responsibilities:
- Design, implement, and maintain backend services and APIs
- Develop and architect complex UI features for iOS and Android apps using Flutter
- Write high-quality, efficient, and maintainable code, adhering to industry best practices
- Participate in design discussions to develop scalable solutions and implement them
- Take ownership of feature delivery timelines and coordinate with cross-functional teams
- Troubleshoot and debug issues to ensure smooth system operations
- Design, develop, and own end-to-end features for in-house software and tools
- Optimize application performance and implement best practices for mobile development
- Deploy and maintain services infrastructure on AWS
Requirements:
- Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field
- Experience:
  - 3–5 years of hands-on experience as a full-stack developer
  - Expertise in developing backend services and mobile applications
  - Experience in leading small technical projects or features
  - Proven track record of delivering complex mobile applications to production
- Technical Skills:
  - Strong knowledge of data structures, algorithms, and design patterns
  - Proficiency in Python and advanced proficiency in Flutter, with a deep understanding of widget lifecycle and state management
  - Proficiency in RESTful APIs and microservices architecture
  - Knowledge of mobile app deployment processes and app store guidelines
  - Familiarity with version control systems (Git) and agile development methodologies
  - Experience with AWS or other relevant cloud technologies
  - Experience with databases (SQL, NoSQL) and data modeling
- Soft Skills:
  - Strong problem-solving and debugging abilities, with the ability to handle complex technical challenges and drive best practices within the team
  - Leadership qualities, with the ability to mentor and guide junior developers
  - Strong stakeholder management and client communication skills
  - A passion for learning and staying updated with technology trends
About Voiceoc
Voiceoc is a Delhi-based health tech startup founded with a vision to help healthcare companies around the globe by leveraging Voice & Text AI. We started operations in August 2020, and today leading healthcare companies across the US, India, the Middle East, and Africa use Voiceoc as a channel to communicate with thousands of patients daily.
Website: https://www.voiceoc.com/
Responsibilities Include (but not limited to):
We’re looking for a hands-on Chief Technology Officer (CTO) to lead all technology initiatives for Voiceoc’s US business.
This role is ideal for someone who combines strong engineering leadership with deep AI product-building experience — someone who can code, lead, and innovate at the same time.
The CTO will manage the engineering team, guide AI development, interface with clients for technical requirements, and ensure scalable, reliable delivery of all Voiceoc platforms.
Technical Leadership
- Own end-to-end architecture, development, and deployment of Voiceoc’s AI-driven Voice & Text platforms.
- Work closely with the Founder to define the technology roadmap, ensuring alignment with business priorities and client needs.
- Oversee AI/ML feature development — including LLM integrations, automation workflows, and backend systems.
- Ensure system scalability, data security, uptime, and performance across all active deployments (US Projects).
- Collaborate with the AI/ML engineers to guide RAG pipelines, voicebot logic, and LLM prompt optimization.
Hands-On Contribution
- Actively contribute to the core codebase (preferably Python/FastAPI/Node).
- Lead by example in code reviews, architecture design, and debugging.
- Experiment with LLM frameworks (OpenAI, Gemini, Mistral, etc.) and explore their applications in healthcare automation.
Product & Delivery Management
- Translate client requirements into clear technical specifications and deliverables.
- Oversee product versioning, release management, QA, and DevOps pipelines.
- Collaborate with client success and operations teams to handle technical escalations, performance issues, and integration requests.
- Drive AI feature innovation — identify opportunities for automation, personalization, and predictive insights.
Team Management
- Manage and mentor an 8–10 member engineering team.
- Conduct weekly sprint reviews, define coding standards, and ensure timely, high-quality delivery.
- Hire and train new engineers to expand Voiceoc’s technical capability.
- Foster a culture of accountability, speed, and innovation.
Client-Facing & Operational Ownership
- Join client calls (US-based hospitals) to understand technical requirements or resolve issues directly.
- Collaborate with the founder on technical presentations and proof-of-concept discussions.
- Handle A–Z of tech operations for the US business — infrastructure, integrations, uptime, and client satisfaction.
Technical Requirements
Must-Have:
- 5-7 years of experience in software engineering with at least 2+ years in a leadership capacity.
- Strong proficiency in Python (FastAPI, Flask, or Django).
- Experience integrating OpenAI / Gemini / Mistral / Whisper / LangChain.
- Solid experience with AI/ML model integration, LLMs, and RAG pipelines.
- Proven expertise in cloud deployment (AWS / GCP), Docker, and CI/CD.
- Strong understanding of backend architecture, API integrations, and system design.
- Experience building scalable, production-grade SaaS or conversational AI systems.
- Excellent communication and leadership skills — capable of interfacing with both engineers and clients.
Good to Have (Optional):
- Familiarity with telephony & voice tech stacks (Twilio, Exotel, Asterisk etc.).
What We Offer
- Opportunity to lead the entire technology vertical for a growing global healthtech startup.
- Direct collaboration with the Founder/CEO on strategy and innovation.
- Competitive compensation — salary + meaningful equity stake.
- Dynamic and fast-paced work culture with tangible impact on global healthcare.
Other Details
- Work Mode: Hybrid - Noida (Office) + Home
- Work Timing: US Hours
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Designation:- AI Architect
Location: Bangalore/Mysore/Remote
Job Type: Full-time
Department: Software R&D
About the Role
We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.
As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation.
Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques.
Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.
Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
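The vector search and RAG responsibilities above rest on one core operation: ranking documents by embedding similarity. A hedged, pure-Python sketch follows; the three-dimensional "embeddings" and document names are toy values invented for illustration. In production, a model produces the embeddings and an ANN index (Pinecone, FAISS, PGVector) replaces the brute-force sort.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy document embeddings (real ones would have hundreds of dimensions).
documents = {
    "doc_pricing":  [0.9, 0.1, 0.0],
    "doc_returns":  [0.1, 0.8, 0.2],
    "doc_shipping": [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    # Brute-force ranking; an ANN index does this approximately but faster.
    ranked = sorted(documents.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# A query embedding close to the "shipping" direction retrieves that document
# first; in a RAG workflow its text would then be added to the LLM prompt.
hits = top_k([0.05, 0.3, 0.85])
print(hits)  # ['doc_shipping', 'doc_returns']
```

The retrieved documents form the grounding context for generation, which is the "retrieval-augmented" half of RAG.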
Required Qualifications
- 8-14 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About our Benefits:
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.
Skills - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing, and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability
5. Implement best practices for version control, model reproducibility, and governance
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies, and trends
10. Provide support and guidance to other team members on MLOps practices
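The version-control and reproducibility practices listed above boil down to fingerprinting a model's inputs. This is a minimal sketch, not any particular registry's API: real registries (MLflow, SageMaker Model Registry) record the same ingredients, and the function name and fields here are invented for illustration.

```python
import hashlib
import json

def register_model(params: dict, data_bytes: bytes, artifact_bytes: bytes) -> dict:
    # Fingerprint the training data and the trained artifact.
    entry = {
        "params": params,
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "artifact_sha256": hashlib.sha256(artifact_bytes).hexdigest(),
    }
    # The version id hashes everything, so identical inputs always yield the
    # identical version -- the basis of a reproducibility check.
    entry["version"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()[:12]
    return entry

v1 = register_model({"lr": 0.01, "epochs": 5}, b"train-data", b"model-weights")
v2 = register_model({"lr": 0.01, "epochs": 5}, b"train-data", b"model-weights")
v3 = register_model({"lr": 0.02, "epochs": 5}, b"train-data", b"model-weights")

print(v1["version"] == v2["version"])  # True: same inputs, same version
print(v1["version"] == v3["version"])  # False: changed hyperparameter, new version
```

A CI stage can re-run training and assert the resulting version matches the registered one, turning reproducibility into an automated gate.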
Required skills and experience:
- 3-10 years of experience in MLOps, DevOps, or a related field
- Bachelor’s degree in Computer Science, Data Science, or a related field
- Strong understanding of machine learning principles and model lifecycle management
- Experience in Jenkins pipeline development
- Experience in automation scripting
At Shipthis, we work to build a better future and make meaningful changes in the freight forwarding industry. Our team members aren't just employees. We are a team of bright, skilled professionals with a single, straightforward goal: to evolve freight forwarders toward digitalized operations, enhancing efficiency and driving lasting change.
As a company, we're just the right size for every person to take initiative and make things happen. Join us in reshaping the future of logistics and be part of a journey where your contributions make a tangible difference.
Learn more at www.shipthis.co
Job Description
Who are we looking for?
We are seeking a skilled Developer who is experienced in Python with E2E project implementation to join our team.
What will you be doing?
- Design and develop backend services for the ERP system using Python and MongoDB
- Collaborate with the frontend development team to integrate the frontend and backend functionalities
- Develop and maintain APIs that are efficient, scalable, and secure
- Write efficient and reusable code that can be easily maintained and updated
- Optimize backend services to improve performance and scalability
- Troubleshoot and resolve backend issues and bugs
Desired qualifications include
- Bachelor’s degree in Computer Science or a related field
- Proven experience in Python FastAPI with E2E project implementation
- Proficiency with DevOps and pipelines (GitHub Actions, Google Cloud Platform)
- Knowledge of microservices architecture
- Experience in MongoDB development, including Aggregation
- Proficiency in RESTful API development
- Experience with the Git version control system
- Strong problem-solving and analytical skills
- Ability to work in a fast-paced environment
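The MongoDB aggregation requirement above can be sketched without a running server. The pipeline below shows the typical `$match` then `$group` shape, and a pure-Python equivalent mirrors what the server computes; the collection fields and values are invented for this example.

```python
from collections import defaultdict

# The pipeline as you would pass it to pymongo's collection.aggregate(...).
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$port", "total_weight": {"$sum": "$weight"}}},
]

# Hypothetical sample documents.
shipments = [
    {"port": "SIN", "status": "shipped", "weight": 120},
    {"port": "SIN", "status": "pending", "weight": 40},
    {"port": "HKG", "status": "shipped", "weight": 75},
    {"port": "SIN", "status": "shipped", "weight": 30},
]

def run_pipeline(docs):
    # $match stage: keep only shipped documents.
    matched = [d for d in docs if d["status"] == "shipped"]
    # $group stage: sum weight per port.
    totals = defaultdict(int)
    for d in matched:
        totals[d["port"]] += d["weight"]
    return dict(totals)

print(run_pipeline(shipments))  # {'SIN': 150, 'HKG': 75}
```

Against a real deployment the same result comes from `db.shipments.aggregate(pipeline)`, with the server doing the matching and grouping.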
We welcome:
- Immediate joiners
- Female candidates returning to work after a career break, who are strongly encouraged to apply
- Candidates at any experience level: whether you're seasoned or just starting out, if you have the skills and passion, we invite you to apply
We are an equal-opportunity employer and are committed to fostering diversity and inclusivity. We do not discriminate based on race, religion, color, gender, sexual orientation, age, marital status, or disability status.
JOB SYNOPSIS
- Location: Bangalore
- Job Type: Full-time
- Role: Software Developer
- Industry Type: Software Product
- Functional Area: Software Development
- Employment Type: Full-Time, Permanent
We are looking for strong Head of Engineering / Engineering Director / VP Engineering profiles.
Mandatory (Experience 1): Must have minimum 15+ years of overall experience in software engineering roles
Mandatory (Experience 2): Must have minimum 8+ years of experience in senior engineering leadership roles (Head of Engineering, Director, Engineering Manager, or equivalent)
Mandatory (Experience 3): Must have strong hands-on proficiency in .NET, Python, and Node.js, with the ability to guide teams on coding standards and best practices
Mandatory (Experience 4): Must have solid experience working with relational and NoSQL databases, specifically PostgreSQL and MongoDB
Mandatory (Experience 5): Must have strong understanding of system architecture principles, scalable system design, and hands-on experience with design patterns such as DDD and CQRS
Mandatory (Experience 6): Must have hands-on experience building and scaling cloud-native microservices, preferably on Azure, with strong exposure to Docker and Kubernetes
Mandatory (Experience 7): Must have proven experience building, mentoring, and scaling high-performing engineering teams across ERP, POS, and Integration platforms
Mandatory (Experience 8): Must have ownership of end-to-end product delivery, including planning, execution, release management, and production readiness for SaaS
Mandatory (Note): Candidate should be from Kolkata or nearby states; please confirm their willingness to relocate if they are not from Kolkata
Preferred
About Sun King
Sun King is the world’s leading off-grid solar energy company, delivering energy access to the 1.8 billion people without reliable grid connections through innovative product design, fintech solutions, and field operations.
Key highlights:
- Connected over 20 million homes to solar power across Africa and Asia, adding 200,000 homes monthly.
- Affordable ‘pay-as-you-go’ financing model; after 1-2 years, customers own their solar equipment.
- Saved customers over $4 billion to date.
- Collect 650,000 daily payments via 28,000 field agents using mobile money systems.
- Products range from home lighting to high-energy appliances, with expansion into clean cooking, electric mobility, and entertainment.
With 2,800 staff across 12 countries, our team includes experts in various fields, all passionate about serving off-grid communities.
Diversity Commitment:
44% of our workforce are women, reflecting our commitment to gender diversity.
About the role:
Sun King is looking for a self-driven Infrastructure Engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.
What you would be expected to do:
- Work with engineering, automation, and data teams on various infrastructure requirements.
- Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform.
- Manage AWS services for multiple teams.
- Manage custom data store deployments such as sharded MongoDB clusters, Elasticsearch clusters, and upcoming services.
- Deploy and manage Kubernetes resources.
- Deploy and manage custom metrics exporters, trace data, and custom application metrics; design dashboards and query metrics from multiple resources as an end-to-end observability stack solution.
- Set up incident response services and design effective processes.
- Deploy and manage critical platform services such as OPA and Keycloak for IAM.
- Advocate best practices for high availability and scalability when designing AWS infrastructure and observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines.
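The custom metrics exporters mentioned above ultimately serve Prometheus's plain-text exposition format, so a minimal exporter is just a function that renders metric lines. This is a hedged sketch with invented metric names; real exporters use the prometheus_client library and serve the payload over HTTP.

```python
def render_metrics(metrics: dict) -> str:
    # metrics maps metric name -> (value, labels dict).
    lines = []
    for name, (value, labels) in metrics.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        if label_str:
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical business metrics for illustration.
payload = render_metrics({
    "payments_processed_total": (650000, {"channel": "mobile_money"}),
    "agent_sessions_active": (28000, {}),
})
print(payload)
# payments_processed_total{channel="mobile_money"} 650000
# agent_sessions_active 28000
```

Prometheus scrapes such a payload from an HTTP endpoint on each target, and Grafana dashboards then query the stored series.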
You might be a strong candidate if you have/are:
- Hands-on experience with Docker or any other container runtime environment and Linux with the ability to perform basic administrative tasks.
- Experience working with web servers (nginx, apache) and cloud providers (preferably AWS).
- Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments.
- Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters).
- Knowledge of web architecture, distributed systems, and single points of failure.
- Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks.
- Good networking fundamentals — SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls.
Good to have:
- Experience with backend development and setting up databases and performance tuning using parameter groups.
- Working experience in Kubernetes cluster administration and Kubernetes deployments.
- Experience working alongside sec ops engineers.
- Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing.
- Setup and usage of OpenTelemetry, central logging, and monitoring systems.
What Sun King offers:
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry.
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.
Responsibilities
- Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
- Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
- Automate the training, testing and deployment processes for machine learning models
- Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability
- Implement best practices for version control, model reproducibility and governance
- Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
- Troubleshoot and resolve issues related to model deployment and performance
- Ensure compliance with security and data privacy standards in all MLOps activities
- Keep up to date with the latest MLOps tools, technologies and trends
- Provide support and guidance to other team members on MLOps practices
Required Skills And Experience
- 3-10 years of experience in MLOps, DevOps, or a related field
- Bachelor’s degree in Computer Science, Data Science, or a related field
- Strong understanding of machine learning principles and model lifecycle management
- Experience in Jenkins pipeline development
- Experience in automation scripting
Position: Automation Test Engineer (0-1 year)
Location: Mumbai
Company: Big Rattle Technologies Private Limited
Immediate Joiners only
Job Summary:
We are looking for a motivated and detail-oriented QA Automation Engineer (Fresher) who is eager to learn and grow in software testing and automation. The candidate will work under the guidance of senior QA engineers to design, execute, and maintain test cases for custom software and products developed within the organisation.
Key Responsibilities:
● Testing & Quality Assurance:
○ Understand business and technical requirements with guidance from senior team members
○ Design and execute manual test cases for web and mobile applications
○ Assist in identifying, documenting, and tracking defects using bug-tracking tools
● Automation Testing:
○ Learn to design, develop, and maintain basic automation test scripts for web applications, mobile applications, and APIs
○ Execute automated test suites and analyze test results
○ Support regression testing activities during releases
● Collaboration & Documentation:
○ Work closely with developers, QA leads, and product teams to understand features and acceptance criteria
○ Prepare and maintain test documentation: test cases, test execution reports, and basic automation documentation
Required Skills:
● Testing Fundamentals
○ Understanding of SDLC, STLC, and basic Agile concepts
○ Knowledge of different testing types - Manual, Functional & Regression testing
● Automation & Tools (Exposure is sufficient)
○ Awareness or hands-on practice with automation tools such as Selenium / Cypress / Playwright and TestNG / JUnit / PyTest
○ Basic understanding of mobile automation concepts (Appium – optional) and API testing using tools like Postman
● Technical Skills
○ Basic programming knowledge in Java / Python / JavaScript
○ Understanding of SQL queries for basic data validation
● Tools & Reporting
○ Familiarity with bug-tracking or test management tools (e.g., JIRA/ Zoho Sprints)
○ Ability to prepare simple test execution reports and defect summaries
● Soft Skills:
○ Good communication and interpersonal skills
○ Strong attention to detail and quality mindset
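The automation skills above start with one building block: running the same assertion over many cases. This is a hedged sketch in plain Python so it runs standalone; under pytest the same table of cases would feed @pytest.mark.parametrize. The validation rule and function names are invented for the example.

```python
def is_valid_username(name: str) -> bool:
    # Hypothetical rule under test: 3-12 characters, alphanumeric only.
    return name.isalnum() and 3 <= len(name) <= 12

# Data-driven cases: (input, expected result).
CASES = [
    ("riya99", True),
    ("ab", False),         # too short
    ("user name", False),  # contains a space
    ("a" * 13, False),     # too long
]

def run_cases():
    # Collect every case where actual != expected; empty list means all pass.
    return [(inp, exp) for inp, exp in CASES if is_valid_username(inp) != exp]

print(run_cases())  # [] -- every case passed
```

Growing the case table, rather than writing a new test per input, is the habit that later transfers directly to parametrized Selenium or API suites.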
Good to Have (Not Mandatory):
● Academic or personal project experience in automation testing
● Awareness of Performance testing tools (JMeter – basic understanding) and Security testing concepts (VAPT – theoretical knowledge is sufficient)
● ISTQB Foundation Level certification (or willingness to pursue)
Required Qualification:
● Bachelor’s degree in Computer Science, Engineering, or a related field
● Freshers or candidates with up to 1 year of experience in Software Testing
● Strong understanding of SDLC, STLC, and Agile methodologies
● Solid foundation in testing principles and eagerness to build a career in QA Automation
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialize in Product Development for our clients. Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
● Opportunity to work on diverse projects for Fortune 500 clients.
● Competitive salary and performance-based growth.
● Dynamic, collaborative, and growth-oriented work environment.
● Direct impact on product quality and client satisfaction.
● 5-day hybrid work week.
● Certification reimbursement.
● Healthcare coverage.
Role Summary
We are looking for a seasoned Python/Django expert with 10–12 years of real-world development experience and a strong background in leading engineering teams. The selected candidate will be responsible for managing complex technical initiatives, mentoring team members, ensuring best coding practices, and partnering closely with cross-functional teams. This position demands deep technical proficiency, strong leadership capability, and exceptional communication skills.
Primary Responsibilities
· Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.
· Architect, design, and deliver secure, scalable, and high-performing web applications.
· Manage the complete software development lifecycle including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.
· Ensure compliance with coding standards, architectural patterns, and established development best practices.
· Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.
· Perform detailed code reviews, optimize system performance, and resolve production-level issues.
· Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.
· Create and maintain technical documentation while providing regular updates to leadership and stakeholders.
Required Skills & Qualifications
· 10–12 years of professional experience in software development with strong expertise in Python and Django.
· Solid understanding of key web technologies, including REST APIs, HTML, CSS, and JavaScript.
· Hands-on experience working with relational and NoSQL databases (such as PostgreSQL, MySQL, or MongoDB).
· Familiarity with major cloud platforms (AWS, Azure, or GCP) and container tools like Docker and Kubernetes is a plus.
· Proficient in Git workflows, CI/CD pipelines, and automated testing tools.
· Strong analytical and problem-solving skills, especially in designing scalable and high-availability systems.
· Excellent communication skills—both written and verbal.
· Demonstrated leadership experience in mentoring teams and managing technical deliverables.
· Must be available to work on-site in the Hyderabad office; remote work is not allowed.
Preferred Qualifications
· Experience with microservices, asynchronous frameworks (such as FastAPI or Celery), or event-driven architectures.
· Familiarity with Agile/Scrum methodologies.
· Previous background as a technical lead or engineering manager.

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Our client is at the cutting edge of AI, psychology, and large-scale data. We believe that we have an opportunity (and even a responsibility) to personalize and humanize how people interact over the internet, and an opportunity to inspire far more trustworthy relationships online than has ever been possible before.
● 10+ years of experience successfully building, deploying, and running complex, large-scale web or data products.
● Proven Management Experience: Demonstrated success managing a team of 5+ engineers for at least 2 years (managing timelines, performance, and hiring). You know how to transition a team from 'startup chaos' to 'structured agility'.
● Full-stack Authority: Deep expertise with JavaScript, Node.js, MySQL, and Python. You must have world-class expertise in at least one area and possess a solid understanding of the entire stack in a multi-tier environment.
● Architectural Track Record: You have built at least two professional-grade products as the tech owner/architect and led the delivery of complex products from conception to release.
● Experience working with REST APIs, Machine Learning, Algorithms & AWS.
● Familiarity with visualization libraries and database technologies.
We also value:
● Your reputation in the technology community within your domain.
● Your participation and success in competitive programming.
● Unusual or extraordinary hobby projects during school/college that were not part of the curriculum.
● The school you come from and the organizations where you have worked earlier.
Technical Trainer - Pollachi.
Willing to travel within a 30 km radius of Pollachi.
Job Description: Technical Trainer
Expertise: HTML, CSS, JavaScript, Python, Artificial Intelligence (AI), and Machine Learning (ML), IoT, and Robotics (Optional).
Work Location: Flexible (Work from Home & Office available)
Target Audience: School students and teachers
Employment Type: Full-time.
Key Responsibilities:
* Develop and deliver content in an easy-to-understand format suitable for varying audience levels.
* Prepare training materials, exercises, and assessments to evaluate participant progress and measure their learning outcomes. Adapt teaching methods to suit both in-person (office) and virtual (work-from-home) formats.
* Stay updated with the latest trends and tools in technology to ensure high-quality training delivery.
About Autonomize AI
Autonomize AI is on a mission to help organizations make sense of the world's data. We help organizations harness the full potential of data to unlock business outcomes. Unstructured dark data contains nuggets of information that, when paired with human context, will unlock some of the most impactful insights for most organizations, and it's our goal to make that process effortless and accessible.
We are an ambitious team committed to human-machine collaboration. Our founders are serial entrepreneurs passionate about data and AI who have started and scaled several companies to successful exits. We are a global, remote company with expertise in building amazing data products, captivating human experiences, disrupting industries, being ridiculously funny, and of course scaling AI.
The Opportunity
As a Senior Machine Learning Engineer at Autonomize, you will lead the development and deployment of machine learning solutions with an emphasis on large language models (LLMs), vision models, and classic NLP (Natural Language Processing) models. The ideal candidate will have a proven track record in these areas, particularly within healthcare contexts, and will play a significant role in advancing our healthcare-optimized AI Copilots and Agents.
What You’ll Do
- Help fine-tune or prompt engineer large language models (LLMs) for various healthcare applications across various customer engagements.
- Develop and refine our approach to handling vision-based data using state-of-the-art vision-language models (VLMs) capable of accurately processing and analyzing medical documents, healthcare forms in various formats, and other visual data.
- Create and enhance classic NLP models to understand and generate human language in healthcare settings, supporting clinical documentation and patient interaction.
- Collaborate with multi-disciplinary teams including data scientists, ML engineers, healthcare clients, and product managers to deliver robust solutions.
- Ensure models are efficiently deployed and integrated into healthcare systems, maintaining high performance and scalability.
- Mentor and provide guidance to junior engineers and data scientists, fostering a culture of continuous learning and innovation.
- Conduct rigorous testing, validation, and tuning of models to ensure accuracy, reliability, and compliance with healthcare standards.
- Apply a deep understanding of training techniques, including distributed training on GPUs and TPUs.
- Stay informed on the latest research, tools, and technologies in machine learning, particularly those applicable to language and vision processing in healthcare.
- Document methodologies, model architectures, and project outcomes effectively for both technical and non-technical audiences.
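As a flavor of the "classic NLP" work described above, here is a minimal sketch: rule-based extraction of fields from healthcare-form-style text using only Python's standard library. Real systems at this level would use trained models; the field names, patterns, and sample form here are illustrative assumptions, not part of Autonomize's actual stack:

```python
import re

# Hypothetical field patterns for a healthcare-form-style document.
PATTERNS = {
    "patient_name": re.compile(r"Patient Name:\s*(.+)"),
    "dob": re.compile(r"DOB:\s*(\d{2}/\d{2}/\d{4})"),
    "diagnosis_code": re.compile(r"ICD-10:\s*([A-Z]\d{2}(?:\.\d+)?)"),
}

def extract_fields(text: str) -> dict:
    """Return whichever known fields appear in the document text."""
    out = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            out[field] = match.group(1).strip()
    return out

form = "Patient Name: Jane Doe\nDOB: 01/02/1980\nICD-10: E11.9"
print(extract_fields(form))
```

In practice a pipeline like this serves as a baseline or a post-processing layer around model outputs, with the learned models handling the documents these rules cannot.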
You’re a Fit If You Have
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 5-7 years of experience in machine learning engineering, with a significant track record of developing production-grade models and model pipelines in a regulated industry such as healthcare.
- Hands-on expertise in working with large language models (e.g., GPT, BERT), computer vision models, and classic NLP technologies.
- Proficient in programming languages such as Python, with extensive experience in ML libraries/frameworks like TensorFlow, PyTorch, OpenCV, etc.
- Strong understanding of deep learning techniques, model fine-tuning, hyperparameter optimization, and model optimization.
- Proven experience in deploying and managing ML models in production environments.
- Excellent analytical skills, with a problem-solving mindset and the ability to think strategically.
- Strong communication skills for articulating complex concepts to diverse audiences.
- Working knowledge of or experience with MLOps and LLMOps tools like MLflow and Kubeflow
- Working knowledge of basic software engineering principles and best practices
- Demonstrated working knowledge of and experience with classic ML techniques and frameworks.
- Nice to have: knowledge of cloud-vendor ML platforms such as Azure ML or SageMaker
Bonus Points:
- Owner mentality: for you, the buck stops with you. You own it, you will learn it, and you will get it done
- You are naturally curious, always experimenting rather than hypothesising: you like to push boundaries, figure things out, and experiment your way through any problem
- You are passionate, unafraid, and loyal to the team and mission
- You love to learn and win together
- You communicate well through voice, writing, chat or video, and work well with a remote/global team
- Large/Complex organization experience in deploying NLP/ML in production
- Experience in efficiently scaling ML model training and inferencing
- Experience with Big Data technologies such as Kafka, Spark, Hadoop, and Snowflake
What we offer:
- Influence & Impact: Lead and shape the future of healthcare AI implementations
- Outsized Opportunity: Join at the ground floor of a rapidly scaling, VC-backed startup
- Ownership, Autonomy & Mastery: Full-stack ownership of customer programs, with freedom to innovate
- Learning & Growth: Constant exposure to founders, customers, and new technologies—plus a professional development budget
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or other legally protected statuses.
🚀 Hiring: Python Developer at Deqode
⭐ Experience: 4+ Years
⭐ Work Mode: Remote
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
Role Overview:
We are looking for a skilled Software Development Engineer (Python) to design, develop, and maintain scalable backend applications and high-performance RESTful APIs. The ideal candidate will work on modern microservices architecture, ensure clean and efficient code, and collaborate with cross-functional teams to deliver robust solutions.
Key Responsibilities:
- Develop and maintain RESTful APIs and backend services using Python
- Build scalable microservices and integrate third-party APIs
- Design and optimize database schemas and queries
- Ensure application security, performance, and reliability
- Write clean, testable, and maintainable code
- Participate in code reviews and follow best engineering practices
Mandatory Skills (3):
- Python – Strong hands-on experience in backend development
- FastAPI / REST API Development – Building and maintaining APIs
- SQLAlchemy / Relational Databases – Database modeling and optimization
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company) : Product companies (B2B SaaS preferred)
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it up to discussion which tools get the problem solved in the most robust and quickest way.
- That said, our bright team of engineers has already constructed a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the Tech Stack we already use.
Brudite is a fast-growing IT services & consulting startup where careers are built through real projects, real responsibility, and continuous learning.
We believe fresh talent grows best when given trust, guidance, and ownership from day one.
At Brudite, you won't just learn, you'll build.
We are looking for Software Engineers who are eager to learn, curious about technology, and ready to take ownership of their work.
This role is ideal for fresh graduates who want hands-on exposure to real-world software development, global clients, and modern engineering practices.
Location: Work From Office
Shift: EST Time Zone (6:30 PM - 3:30 AM)
Experience: Freshers
We are looking for Software Engineers with strong technical fundamentals and logical problem-solving skills who are eager to learn and take ownership of their work.
This role is ideal for candidates who enjoy coding, thinking through problems, and building reliable software, not just following instructions.
Responsibilities
- Deliver assigned tasks and features with high quality and accountability
- Write clean, maintainable, and well-documented code
- Take ownership of tasks from understanding requirements to delivery
- Work directly with clients to understand technical requirements and expectations
- Collaborate closely with client engineering and product teams
- Continuously learn and apply new technologies and tools
Job Requirements:
- Take responsibility for delivering quality work
- Ownership mindset and positive attitude
- Strong written and verbal communication skills are essential for this role
- Excellent logical and analytical thinking skills
- Good understanding of Data structures & basic algorithms, OOPs concept
- Hands-on experience with a programming language like Python
- Basic understanding of cloud concepts (AWS / GCP / Azure fundamentals)
- Ability to write and explain logical solutions
- Training / Internship / bootcamp experience is a plus
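The data-structures and algorithms fundamentals listed above can be illustrated with a short, interview-style Python sketch (a hypothetical example, not part of any actual project): a stack-based balanced-bracket check, which also touches the OOP concepts mentioned.

```python
class Stack:
    """Minimal stack built on a Python list, to make the data structure explicit."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def empty(self):
        return not self._items

def is_balanced(text: str) -> bool:
    """Check that (), [], {} in the text are properly matched and nested."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = Stack()
    for ch in text:
        if ch in "([{":
            stack.push(ch)
        elif ch in pairs:
            # A closer must match the most recently opened bracket.
            if stack.empty() or stack.pop() != pairs[ch]:
                return False
    return stack.empty()  # leftover openers mean the text is unbalanced

print(is_balanced("a[b]{c(d)}"))  # True
print(is_balanced("(]"))          # False
```

Being able to write and explain a solution like this, including why a stack is the right structure, is exactly the kind of logical reasoning the role asks for.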
Apply Using : https://app.skillbrew.ai/jobs/?id=87
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as a code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
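The tagging-governance and Python-automation responsibilities above can be sketched as a small compliance check. In production the resource list would come from an AWS SDK call (e.g., via boto3); here it is mocked with plain dicts so the example stays self-contained, and the required-tag policy is a hypothetical assumption:

```python
# Hypothetical tagging policy; real policies vary per organization.
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def untagged_resources(resources):
    """Return (resource_id, missing_tags) for each non-compliant resource."""
    report = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("Tags", {}))
        if missing:
            report.append((res["Id"], sorted(missing)))
    return report

# Mocked stand-in for data an SDK call would normally fetch.
fetched = [
    {"Id": "i-0abc", "Tags": {"Owner": "platform", "Environment": "prod", "CostCenter": "42"}},
    {"Id": "i-0def", "Tags": {"Owner": "data"}},
]
for res_id, missing in untagged_resources(fetched):
    print(f"{res_id} is missing tags: {', '.join(missing)}")
```

A script along these lines could run on a schedule (e.g., from Lambda) and feed cost-allocation and governance reports, which is the shape of automation this role describes.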
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices