
50+ Python Jobs in Chennai | Python Job openings in Chennai

Apply to 50+ Python Jobs in Chennai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Chennai
4 - 15 yrs
₹30L - ₹40L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Generative AI
Python
Scikit-Learn
+4 more

About the Role

 

We are looking for a highly skilled Data Scientist with strong expertise in Machine Learning, MLOps, and Generative AI. The ideal candidate will have hands-on experience in building scalable ML models, deploying them in production, and working with modern AI frameworks, including GenAI technologies.

 

 

 

Key Responsibilities

  • Design, develop, and deploy machine learning models for real-world business problems
  • Work on the end-to-end ML lifecycle: data preprocessing, model building, evaluation, deployment, and monitoring
  • Implement and manage MLOps pipelines for scalable and reproducible workflows
  • Utilize tools like MLflow for experiment tracking, model versioning, and lifecycle management
  • Develop and integrate Generative AI (GenAI) solutions such as LLM-based applications
  • Collaborate with cross-functional teams (engineering, product, business) to translate requirements into AI solutions
  • Optimize model performance and ensure production stability
  • Stay updated with the latest advancements in the AI/ML and GenAI ecosystems
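For context, the experiment-tracking responsibility above follows a simple pattern: each training run gets an ID, logged parameters and metrics, and a persisted record. This is a dependency-free stand-in sketched for illustration only; the class name and JSON file layout are invented and are not MLflow's actual storage format.

```python
import json
import time
import uuid
from pathlib import Path

class RunTracker:
    """Minimal stand-in for MLflow-style experiment tracking: each run gets an
    ID plus logged params/metrics, persisted as a JSON record on disk."""

    def __init__(self, root: str = "mlruns-lite"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def start_run(self) -> dict:
        return {"run_id": uuid.uuid4().hex, "start": time.time(),
                "params": {}, "metrics": {}}

    def log_param(self, run: dict, key: str, value) -> None:
        run["params"][key] = value

    def log_metric(self, run: dict, key: str, value: float) -> None:
        # Metrics are appended, so a value can be logged once per epoch.
        run["metrics"].setdefault(key, []).append(value)

    def end_run(self, run: dict) -> Path:
        path = self.root / f"{run['run_id']}.json"
        path.write_text(json.dumps(run))
        return path
```

With MLflow itself the same flow is `with mlflow.start_run(): mlflow.log_param(...)` / `mlflow.log_metric(...)`, plus model versioning via the model registry.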

 

 

 

Required Skills & Qualifications

 

  • 4+ years of experience in Data Science / Machine Learning
  • Strong programming skills in Python
  • Hands-on experience with ML modeling techniques (supervised, unsupervised, NLP, etc.)
  • Solid understanding of MLOps practices and tools
  • Experience with MLflow or similar model lifecycle tools
  • Practical experience in Generative AI (GenAI), including working with LLMs
  • Experience with libraries/frameworks like Scikit-learn, TensorFlow, PyTorch
  • Strong understanding of data structures, algorithms, and statistics
  • Experience with cloud platforms (AWS/GCP/Azure) is a plus


Good to Have

 

  • Experience with LLM fine-tuning, prompt engineering, or RAG pipelines
  • Exposure to Docker, Kubernetes, and CI/CD pipelines
  • Knowledge of data engineering workflows



Koolioai
Posted by Aishwaria SterlingJames
Remote only
1 - 4 yrs
₹6L - ₹10L / yr
Python
React.js
Flask
Google Cloud Platform (GCP)

About koolio.ai

Website: www.koolio.ai

koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.

About the Full-Time Position

We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.

Key Responsibilities:

  • Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
  • Design and build efficient, secure, and modular client-side and server-side architecture
  • Develop high-performance web applications with reusable and maintainable code
  • Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
  • Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
  • Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
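The REST/Cloud Functions responsibilities above boil down to small request → validate → respond handlers. Below is a hedged sketch: `clip_audio` and its fields are hypothetical (invented for illustration), and in a deployed Google Cloud Function the input dict would come from `request.get_json()` inside the framework-provided entry point.

```python
import json

def make_response(payload: dict, status: int = 200):
    """Return a (body, status, headers) tuple a Cloud Function can serve directly."""
    return json.dumps(payload), status, {"Content-Type": "application/json"}

def clip_audio(request_json: dict):
    """Hypothetical endpoint logic: validate an audio-clip request, return its duration."""
    if not request_json or not {"start", "end"} <= request_json.keys():
        return make_response({"error": "start and end are required"}, 400)
    start, end = request_json["start"], request_json["end"]
    if end <= start:
        return make_response({"error": "end must be after start"}, 400)
    return make_response({"duration": round(end - start, 3)})
```

Keeping the validation logic in a plain function like this also makes it unit-testable without deploying anything.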

Requirements and Skills:

  • Education: Degree in Computer Science or a related field
  • Work Experience: 2+ years of proven experience as a Full Stack Developer or in a similar role, with demonstrable expertise in building web applications at scale
  • Technical Skills:
    • Proficiency in front-end technologies such as HTML, CSS, JavaScript, jQuery, and ReactJS
    • Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
    • Familiarity with NoSQL and PostgreSQL databases
    • Experience working with audio/video processing libraries is a strong plus
  • Soft Skills:
    • Strong problem-solving skills and the ability to think critically about issues and solutions
    • Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
    • Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
    • Keen attention to detail and a passion for delivering high-quality, scalable solutions
  • Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment

Compensation and Benefits:

  • Health Insurance: Comprehensive health coverage provided by the company
  • ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team

Why Join Us?

  • Be a part of a passionate and visionary team at the forefront of audio content creation
  • Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
  • Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
  • Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
  • Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
Moative

Posted by Eman Khan
Chennai
1 - 3 yrs
₹8L - ₹17L / yr
Computer Vision
Artificial Intelligence (AI)
Machine Learning (ML)
Large Language Models (LLM)
Natural Language Processing (NLP)
+2 more

About Moative

Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are math PhDs, Ivy League alumni, ex-Googlers, and successful entrepreneurs.


Work you’ll do

As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.


Responsibilities

  • Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
  • Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
  • Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
  • Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions


Who you are

You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration. 


You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.


Skills & Requirements

  • 1–3 years of experience in programming languages such as Python or Scala
  • Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and of containerization and DevOps tooling (Docker, Kubernetes)
  • Experience tuning and deploying foundation models, particularly for vision tasks and data extraction
  • Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
  • Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.


Here are some of our guiding principles:

  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about which rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work are of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers. 


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Remote only
0 - 2 yrs
₹0 / mo
Python
Problem solving
Communication Skills

AI & ML (45 Days – Live Hands‑On)

Program Fee: ₹25,000

Duration: 45 Days

Mode: Hybrid (Online Sessions + Live Lab Access)

Eligibility: Freshers, Final‑Year Students, and Career Switchers

 

About the Internship

This intensive 45‑day internship program is designed for freshers who want to build strong, industry‑relevant skills in Cloud Infrastructure, Cybersecurity, and AI/ML model development. The program offers live, hands‑on production training, allowing interns to work on real-world architectures, security workflows, and AI/ML deployments.

Participants will receive end‑to‑end exposure to modern cloud platforms, DevOps practices, security operations, and machine learning deployment, making them job‑ready for roles like:

  • Cloud/Infra Engineer
  • DevOps Engineer
  • Security Analyst
  • AI/ML Engineer
  • Site Reliability Engineer (SRE)

  

Key Highlights

  • 45 days of practical, mentor-led training
  • Live production-style projects and deployments
  • Hands‑on experience with:
    • AWS / Azure Cloud
    • Terraform, CI/CD, Docker, Kubernetes
    • Security Hardening & IAM
    • Python ML pipelines & model deployment
  • Architect & deploy real systems using best practices
  • Build portfolio-ready projects
  • Receive an industry-recognized Internship Certificate

 

What You Will Learn

1️⃣ Cloud Infrastructure & DevOps

  • Cloud fundamentals (AWS/Azure/GCP)
  • Linux administration & scripting
  • VPC, Subnets, Routing, NAT, Firewalls
  • EC2 provisioning & autoscaling
  • Load balancing & High Availability
  • Terraform for Infrastructure as Code
  • CI/CD pipelines using Jenkins / GitHub Actions
  • Docker containerization & Kubernetes basics

 

2️⃣ Cybersecurity & Cloud Security

  • IAM roles, policies, access control
  • Server security & hardening
  • SSL/TLS, encryption, key management
  • Secure VPC & subnet design
  • Threat detection & logging
  • Secrets management
  • Network segmentation & firewall best practices

 

3️⃣ AI & Machine Learning

  • Python for ML
  • Supervised and unsupervised algorithms
  • Data preprocessing & model training
  • Model evaluation & optimization
  • Build ML inference APIs using FastAPI/Flask
  • Containerize and deploy ML models to cloud
  • Integrate monitoring for ML workflows
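The "ML inference API" item in the list above can be pictured end to end with a dependency-free sketch: a hand-set logistic model behind a bare WSGI callable. The weights and feature names are invented for illustration; in practice FastAPI or Flask would replace the WSGI layer, and a trained, serialized model would replace the hard-coded weights.

```python
import json
import math

# Hypothetical "trained" weights: [bias, w_age, w_income] — invented for illustration.
WEIGHTS = [-1.0, 0.04, 0.3]

def predict(age: float, income: float) -> float:
    """Logistic-regression inference: sigmoid of the weighted sum."""
    z = WEIGHTS[0] + WEIGHTS[1] * age + WEIGHTS[2] * income
    return 1.0 / (1.0 + math.exp(-z))

def app(environ, start_response):
    """Bare WSGI endpoint; FastAPI/Flask would normally play this role."""
    size = int(environ.get("CONTENT_LENGTH") or 0)
    body = json.loads(environ["wsgi.input"].read(size) or b"{}")
    score = predict(body.get("age", 0), body.get("income", 0))
    payload = json.dumps({"score": round(score, 4)}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [payload]
```

Containerizing this for cloud deployment (the next bullet) is then a matter of a Dockerfile that runs the app under a WSGI/ASGI server.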

 

✅ Live Hands‑On Projects

Interns will work on real-world, production-grade projects such as:

  • Deploy a secure 3‑tier web application on cloud
  • Automate infra provisioning using Terraform
  • Build CI/CD pipelines for automated deployments
  • Harden servers & configure security groups, IAM
  • Develop and deploy an ML model as a cloud API
  • Create monitoring dashboards with Prometheus/Grafana
  • End-to-end system deployment with logging and alerting

 

Each intern will complete a Capstone Project and present it during the final evaluation.

 

✅ Internship Deliverables

  1. Internship Completion Certificate
  2. 3+ Production‑level projects
  3. GitHub portfolio with all code and deployments
  4. Cloud & ML documentation
  5. Resume enhancement and guidance
  6. Career mentoring + interview preparation

 

✅ Who Should Apply

This internship is ideal for:

  • Fresh graduates
  • Final‑year engineering or IT students
  • BSc, BCA, MCA, B.Tech learners
  • Professionals switching careers to Cloud/DevOps/AI
  • Anyone seeking hands‑on, real‑time industry experience

 

✅ Program Fee

₹25,000/- (includes training, labs, live‑project access, certificate, and mentorship)

 

✅ Certificate Provided

All participants will receive a verified Internship Certificate, including:

  • Candidate Name
  • Internship Duration & Dates
  • Skills Covered
  • Project Evaluation Score
  • Authorized Signatory & Company Seal
Hashone Career
Posted by Madhavan I
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Noida
7 - 10 yrs
₹20L - ₹35L / yr
Python
Django
SQL

ROLE SUMMARY

The Senior Python Developer designs, builds, and improves Python and Django applications. The role includes developing end‑to‑end integrations using REST and SOAP services and delivering reliable, scalable solutions through hands‑on coding and data transformation work. The developer works closely with Business Analysts, architects, and other teams to ensure technical solutions support business needs. Key responsibilities also include improving SQL performance, taking part in code reviews, supporting DevOps workflows with Git and Azure DevOps, and helping integrate GenAI features—such as GPT models, embeddings, and agent‑based tools—into enterprise applications.

ROLE RESPONSIBILITIES

  • Design and develop Python and Django applications that are scalable, secure, and maintainable.
  • Implement UI components using CSS, Bootstrap, jQuery, or similar technologies as needed.
  • Develop integrations with internal and external systems using REST, SOAP, and WSDL‑based services.
  • Create and optimize SQL queries, database structures, and data access logic to support application features.
  • Work with Business Analysts and stakeholders to translate functional requirements into technical specifications and solutions.
  • Implement accurate data mappings and transformations in accordance with business and technical requirements.
  • Contribute to code reviews, follow established coding standards, and ensure high‑quality deliverables.
  • Support the implementation and maintenance of DevOps pipelines using Git and Azure DevOps.
  • Contribute to the integration of GenAI capabilities—including GPT models, embeddings, and agent‑based components—into enterprise applications.
  • Troubleshoot issues across the application stack and collaborate closely with peers to resolve technical challenges.
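The data-mapping responsibility above usually reduces to a declarative field map plus type normalization. A minimal sketch, with invented source/target field names and formats (real mappings would come from the business requirements):

```python
from datetime import datetime

# Hypothetical source→target field map; names are invented for illustration.
FIELD_MAP = {"cust_nm": "customer_name", "ord_dt": "order_date", "amt": "amount"}

def transform(record: dict) -> dict:
    """Rename fields per FIELD_MAP and normalize types for the target system."""
    out = {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}
    if "order_date" in out:  # e.g. '31-12-2024' → ISO 8601
        out["order_date"] = datetime.strptime(out["order_date"], "%d-%m-%Y").date().isoformat()
    if "amount" in out:
        out["amount"] = round(float(out["amount"]), 2)
    return out
```

Keeping the map declarative makes the transformation reviewable by Business Analysts and easy to extend without touching the logic.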

TECHNICAL QUALIFICATIONS

  • 7+ years of hands‑on experience with Python and Django, including complex application development.
  • 5+ years of experience with SQL development, optimization, and database design.
  • 1–2 years of applied experience with GenAI technologies (GPT models, embeddings, agents, etc.).
  • Deep expertise in application architecture, system integration, and service‑oriented design.
  • Strong experience with DevOps tools and practices, including Git, Azure DevOps, CI/CD pipelines, and automated deployments.
  • Advanced understanding of REST, SOAP, WSDL, and large‑scale service integrations.

GENERAL QUALIFICATIONS

  • Exceptional verbal and written communication skills.
  • Strong analytical, problem‑solving, and architectural reasoning abilities.
  • Demonstrated leadership experience with the ability to guide and mentor technical teams.
  • Proven ability to work effectively in fast‑paced, collaborative environments.

EDUCATION REQUIREMENTS

  • Bachelor’s degree in Computer Science, MIS, or a related field.
  • Advanced certifications in Python, cloud technologies, or GenAI are preferred but not required.

 

Vintronics Consulting
Bengaluru (Bangalore), Chennai
5 - 10 yrs
₹10L - ₹28L / yr
C
C++
Python
Java
Linux kernel
+1 more

Job Title: Senior Linux Kernel Engineer

Experience: 5–10 Years

Location: Bangalore / Chennai

Domain: Enterprise Linux / Kernel Development


Job Summary

We are seeking a highly skilled Senior Linux Kernel Engineer with deep expertise in kernel development, debugging, and performance optimization. The role involves working on enterprise-grade Linux distributions, kernel lifecycle management, security patching, and low-level hardware integration.


Key Responsibilities

1. Kernel Lifecycle & Maintenance

  • Lead kernel upgrade strategies (e.g., LTS migrations such as 5.15 → 6.x) while ensuring stability and compatibility.
  • Perform patch porting across kernel versions, resolving API and dependency conflicts.
  • Track and mitigate security vulnerabilities by monitoring CVEs and upstream sources (e.g., LKML).
  • Backport critical fixes to production kernels without impacting system stability.

2. Debugging & System Stability

  • Act as an escalation point for kernel panics and system crashes.
  • Perform post-mortem analysis using kdump, crash, and gdb.
  • Debug early boot issues (UEFI, initramfs, kernel initialization).
  • Conduct performance analysis using eBPF, ftrace, and perf to optimize system behavior.

3. Driver Development & Hardware Integration

  • Design, develop, and maintain device drivers (network, storage, GPU, or character devices).
  • Work closely with hardware through DMA, interrupts (MSI-X), and register-level programming.
  • Maintain out-of-tree drivers using DKMS or similar frameworks.
  • Ensure compatibility of drivers across kernel updates.


Required Technical Skills

  • Programming: Strong expertise in C (mandatory) and C++
  • Kernel Internals: Deep understanding of:
    • Virtual File System (VFS)
    • Memory Management (MMU, Paging)
    • Process Scheduler
    • Linux Networking Stack
  • Debugging Tools:
    • kdump, crash, gdb
    • kprobes, trace-cmd, ftrace
    • perf, valgrind
    • Hardware debugging tools (JTAG, Serial Console)
  • Build Systems:
    • Kbuild, Makefiles
    • Kernel packaging (RPM/Debian)
  • Security:
    • Experience with CVE patching and backporting
    • Knowledge of SELinux/AppArmor
    • Kernel hardening (FIPS, KSPP)


Preferred Skills

  • Experience contributing to open-source kernel projects
  • Familiarity with Linux Kernel Mailing List (LKML) workflows
  • Exposure to enterprise Linux distributions (RHEL, Ubuntu, SUSE)
  • Experience with performance tuning and system optimization at scale



1. Core Programming (C Language)

  • Must have strong hands-on experience in C programming
  • Comfortable with pointers, memory management, and low-level concepts

2. Kernel Internals Expertise

  • Should have worked in at least one subsystem:
    • VFS / File Systems
    • Memory Management
    • Scheduler / Networking

3. Debugging & Crash Analysis

  • Experience handling kernel panics
  • Hands-on with vmcore analysis tools

4. Security & Patching

  • Understanding of CVE fixes and backporting

5. Driver Development

  • Experience in writing or maintaining device drivers

6. Performance & Advanced Debugging

  • Exposure to eBPF, ftrace, perf

7. Hardware-Level Understanding

  • Knowledge of DMA, interrupts, hardware interaction

Soft Skills

  • Strong analytical and problem-solving abilities
  • Excellent communication skills
  • Ability to work independently and in collaborative environments
  • Quick learner with adaptability to new technologies


Bengaluru (Bangalore), Chennai
5 - 10 yrs
₹10L - ₹25L / yr
C
C++
Linux/Unix
Python
DevOps
+4 more

Job Title: Cloud Development & Linux Debugging Engineer

Experience: 5–10 Years

Location: Bangalore / Chennai


Job Summary

We are looking for an experienced Cloud Development & Linux Debugging Engineer with strong expertise in Linux internals, system-level programming, and cloud technologies. The ideal candidate will have hands-on experience in developing, debugging, and optimizing Linux-based systems along with exposure to DevOps tools and containerized environments.


Key Responsibilities

  • Develop and debug software at the Linux system level (kernel/user space).
  • Work on Linux internals, low-level system components, and performance optimization.
  • Design, develop, and maintain applications using Python and C/C++.
  • Troubleshoot complex issues in Linux and cloud-based environments.
  • Collaborate with cross-functional teams in an Agile/Scrum environment.
  • Contribute to automation and infrastructure using DevOps tools.
  • Work with containerized and cloud platforms such as Kubernetes and OpenStack.


Required Skills

  • Strong experience in Linux software development (Linux internals, system-level programming).
  • Proficiency in Python and C/C++.
  • Solid debugging and analytical skills.
  • Hands-on experience with Ansible, Puppet, and DevOps practices.
  • Experience working with OpenStack and Kubernetes.
  • Good understanding of Agile/Scrum methodologies.
  • Excellent communication and teamwork skills.


Preferred Skills (Good to Have)

  • Experience with Go / Golang and Go templating.
  • Knowledge of Kubernetes Operators and Helm.
  • Exposure to containerization technologies (Docker, Kubernetes).
  • Contributions to open-source projects.
  • Experience with cloud-native architectures.


Qualifications

  • Bachelor’s/Master’s degree in Computer Science, Engineering, or related field.
  • Self-driven individual with a strong learning mindset.
  • Ability to work independently and in collaborative team environments.


Chennai
0 - 1 yrs
₹1.8L - ₹2.4L / yr
PowerShell
Python
JavaScript


Location: Chennai (Hybrid Model)

Commitment: Minimum 2 years (excluding a 3-month probation period)

Experience Level: Fresher / Entry Level


About the Role

We are looking for enthusiastic and fast‑learning fresh graduates to join our Infrastructure & Security Engineering team. This role involves hands‑on work in system administration, implementation of infrastructure and security components, and continuous learning across multiple technology vendors and cloud environments including Microsoft, AWS, GCP, and others.

You will receive extensive training, mentorship, and opportunities to work directly with customers to demonstrate new products and solutions.


Key Responsibilities


Infrastructure & System Administration

  • Assist in the deployment, configuration, and administration of IT infrastructure components (servers, networks, cloud services, and security tools).
  • Work with multi‑vendor environments such as Microsoft, AWS, GCP, and other OEMs.
  • Support day‑to‑day system monitoring, performance checks, and troubleshooting activities.

Security Implementation

  • Participate in the implementation and maintenance of security solutions including identity management, endpoint security, SIEM, firewalls, and cloud security tools.
  • Learn and follow best practices for secure configurations and compliance requirements.

Scripting & Automation

  • Develop automation scripts using PowerShell, Python, and JavaScript to streamline operational tasks.
  • Contribute to internal automation projects and efficiency improvement initiatives.
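As a small sketch of the kind of operational automation described above — here counting failed SSH logins per source IP from an auth log. The log lines are invented samples (real sshd logs vary by distro and syslog configuration); a PowerShell or JavaScript version would follow the same parse-and-aggregate pattern.

```python
import re
from collections import Counter

# Hypothetical sample log lines, invented for illustration.
AUTH_LOG = """\
Jan 10 02:11:01 web1 sshd[811]: Failed password for root from 203.0.113.7 port 4022
Jan 10 02:11:05 web1 sshd[811]: Failed password for root from 203.0.113.7 port 4023
Jan 10 02:12:40 web1 sshd[902]: Accepted password for deploy from 198.51.100.2 port 5100
Jan 10 02:13:01 web1 sshd[913]: Failed password for admin from 203.0.113.9 port 4100
"""

def failed_logins_by_ip(log_text: str) -> Counter:
    """Count failed SSH login attempts per source IP address."""
    pattern = re.compile(r"Failed password for \S+ from (\S+)")
    return Counter(m.group(1) for m in pattern.finditer(log_text))
```

Scripts like this are the building blocks of the monitoring and security workflows mentioned earlier: the counts can feed an alert threshold or a firewall block list.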

AI/ML Exposure

  • Gain foundational understanding of AI & ML product development.
  • Assist in integrating AI capabilities into internal or customer‑facing tools where applicable.

Customer Engagement

  • Learn and perform product demos for customers on demand.
  • Participate in customer visits and meetings alongside senior team members to support solution discussions.
  • Present technical concepts in clear and professional English.


Required Skills

  • Basic understanding of system administration, networking, cloud fundamentals, or security concepts.
  • Strong scripting capabilities in PowerShell, Python, and JavaScript.
  • Curiosity and willingness to learn AI/ML‑related product development.
  • Excellent verbal and written English communication skills.
  • Ability to quickly learn new technologies and adapt to dynamic project needs.

Who Should Apply?

  • Fresh graduates (B.E/B.Tech/B.Sc/BCA/MCA or equivalent) passionate about IT infrastructure, security, cloud, and automation.
  • Individuals who are eager to learn, enthusiastic about hands‑on work, and comfortable interacting with customers.
  • Candidates willing to commit 2 years to grow within the organization as we invest in extensive training and development.

Work Model

  • Hybrid, based in Chennai, with flexibility to work from both office and home as needed.

What We Offer

  • Structured training in multi‑cloud, security, scripting, and automation.
  • Hands‑on exposure to real‑world implementation projects.
  • Opportunities to explore AI/ML product workflows.
  • Mentorship from experienced engineers and architects.
  • Career growth into Infra Engineer, Security Engineer, Cloud Engineer, Automation Engineer, or AI/ML Solution Specialist.


Foss Infotech
Posted by HR Foss
Chennai, Coimbatore
2 - 5 yrs
₹3L - ₹7L / yr
Python
Odoo (OpenERP)
PostgreSQL
JavaScript

Role: Odoo Developer

Exp: 2+ Years

Location: Chennai

Preferred: Chennai-based candidates


Key Responsibilities

  • Develop and customise Odoo modules based on business requirements.
  • Design, develop, and maintain ERP applications using the Odoo framework.
  • Implement and customise Odoo Manufacturing (MRP) modules including Work Orders, Bills of Materials (BoM), Routings, and Production Planning.
  • Integrate third-party applications and APIs using web services.
  • Work with the PostgreSQL database for data management, optimisation, and administration.
  • Develop Odoo views, reports, and UI components using HTML, CSS, XML.
  • Support server deployment, troubleshooting, and performance optimisation of Odoo applications.
  • Understand and enhance existing Odoo functionalities and provide technical improvements.
  • Collaborate with functional teams to translate business requirements into technical solutions.
  • Interact with clients and functional teams to understand requirements and support project delivery.


Required Skills

  • 2+ years of experience in Odoo (OpenERP) development and customisation.
  • Hands-on experience in Odoo Manufacturing (MRP) module implementation and customisation.
  • Familiarity with Python web frameworks such as Django or Flask.
  • Strong understanding of Object-Oriented Programming (OOP).
  • Experience with web services and API integrations.
  • Experience with PostgreSQL database management and optimisation.
  • Understanding of ORM (Object Relational Mapper) frameworks.
  • Knowledge of server deployment and troubleshooting.



Service-based company

Agency job
via Codemind Staffing Solutions by Krishna kumar
Chennai
4 - 8 yrs
₹15L - ₹30L / yr
Python
Generative AI
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
FastAPI
+3 more

Job Title: Python Backend / GenAI Engineer (4+ Years)

Job Summary:

We are looking for a Python Backend Engineer with experience in Generative AI, LangGraph workflows, data engineering, and AI evaluation using Arize AI.

Responsibilities

  • Develop backend APIs using Python (FastAPI / Flask / Django)
  • Build Generative AI and RAG-based applications
  • Design LangGraph / agent workflows
  • Create data engineering pipelines (ETL, data processing)
  • Implement LLM monitoring and evaluation using Arize AI
  • Integrate vector databases and AI services
  • Maintain scalable and production-ready backend systems
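The RAG applications mentioned above reduce to retrieve-then-generate. Below is a dependency-free sketch of the retrieval half using bag-of-words cosine similarity; the documents are invented for illustration, and production systems would swap in model embeddings with a vector database (FAISS, Pinecone) and pass the assembled prompt to an LLM.

```python
import math
from collections import Counter

DOCS = [  # hypothetical knowledge base, invented for illustration
    "Refunds are processed within 5 business days of approval.",
    "Shipping to Chennai takes two to four days.",
    "Invoices can be downloaded from the billing dashboard.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt an LLM call would receive."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Frameworks like LangChain/LangGraph wrap the same retrieve → assemble → generate steps into composable graph nodes, with monitoring (e.g., Arize AI) observing each stage.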

Required Skills

  • 4+ years of Python backend development
  • Experience in Generative AI / LLM applications
  • Knowledge of LangGraph / LangChain
  • Experience in data engineering pipelines
  • Familiarity with Arize AI or similar model evaluation tools
  • Understanding of REST APIs, databases, and Docker

Good to Have

  • Cloud platforms (Azure / AWS)
  • Vector databases (FAISS, Pinecone, Azure AI Search)


Read more
House Of Shipping
Posted by Sanikha M
Chennai
3 - 8 yrs
₹8L - ₹15L / yr
Google Cloud Platform (GCP)
NodeJS (Node.js)
Python
Java
API
+1 more

Key Responsibilities

  • Design, develop, and maintain microservices and APIs running on GKE, Cloud Run, App Engine, and Cloud Functions.
  • Build secure, scalable REST and GraphQL APIs to support Our Client front-end applications and integrations.
  • Work with the GCP Architect to ensure back-end design aligns with enterprise architecture and security best practices.
  • Implement integration layers between GCP-hosted services, AlloyDB, Cloud Spanner, Cloud SQL, and third-party APIs.
  • Deploy services using Gemini Code Assist, CLI tools, and Git-based CI/CD pipelines.
  • Optimize service performance, scalability, and cost efficiency.
  • Implement authentication, authorization, and role-based access control using GCP Identity Platform / IAM.
  • Work with AI/ML services (e.g., Vertex AI, Document AI, NLP APIs) to enable intelligent back-end capabilities.
  • Collaborate with front-end developers to design efficient data contracts and API payloads.
  • Participate in code reviews and enforce clean, maintainable coding standards.

Experience & Qualifications

  • 6–8 years of back-end development experience, with at least 3+ years in senior/lead analyst roles.
  • Proficiency in one or more back-end programming languages: Node.js, Python, or Java.
  • Strong experience with GCP microservices deployments on GKE, App Engine, Cloud Run, and Cloud Functions.
  • Deep knowledge of AlloyDB, Cloud Spanner, and Cloud SQL for schema design and query optimization.
  • Experience in API development (REST/GraphQL) and integration best practices.
  • Familiarity with Gemini Code Assist for code generation and CLI-based deployments.
  • Understanding of Git-based CI/CD workflows and DevOps practices.
  • Experience integrating AI tools into back-end workflows.
  • Strong understanding of cloud security and compliance requirements.
  • Excellent communication skills for working in a distributed/global team environment.


Read more
Amura Health

Posted by supriya C
Chennai
4 - 14 yrs
₹20L - ₹50L / yr
NodeJS (Node.js)
Python
Amazon Web Services (AWS)

Role Overview:

We are seeking a Tech Lead (5–14 yrs experience) to design, build, and scale the technology foundation for our Support Excellence function. This role sits at the intersection of engineering, product, and operations, ensuring that our internal teams and, eventually, our end users experience seamless, efficient, and data-driven support.

You will lead a small but high-impact team of engineers, own the support tooling roadmap, and implement solutions that handle ticket triage, data quality issues, automation, and integrations with our healthcare SaaS platform.

This is a hands-on technical leadership role ideal for someone who thrives on solving operational challenges through technology, building frameworks from scratch, and enabling customer-facing and internal support teams to scale effectively.

Key Responsibilities

1. Build & Enhance Support Platform:

● Own the engineering roadmap for support tooling — ticketing systems, triage workflows, knowledge bases, and automation bots.
● Design and implement scalable support frameworks, ensuring fast triage, data-driven escalation, and high-quality resolution.
● Integrate support tooling with product backend, CMS, and analytics systems to enable context-aware assistance.

2. Technical Leadership & Delivery:

● Lead a small team of engineers (SEs and SSEs), providing guidance on design, architecture, and coding standards.
● Stay hands-on with coding and reviews while enabling the team to deliver high-quality, maintainable solutions.
● Partner closely with Program Managers and Business Analysts to translate requirements into technical execution.

3. Automation, Data & AI-Driven Support:

● Implement automation workflows (bots, routing, notifications) to reduce manual load and optimize SLA adherence.
● Drive the adoption of AI/ML solutions for ticket classification, triage, and predictive resolution.
● Build analytics dashboards to track support KPIs (FRT, TTR, resolution quality).
● Partner with Product Managers, Designers, and Engineers to ensure delivery fidelity.
● Restore transparency and speed between business stakeholders and tech teams.

4. Cross-functional Collaboration:

● Work with Product, QA, Customer Success, and Ops to ensure support needs are captured early in the roadmap.
● Serve as the engineering voice in discussions on escalation flows, release readiness, and customer-facing support enablement.
● Collaborate with content and ops teams to power self-service experiences (knowledge bases, FAQs, in-app help).

5. Documentation & Knowledge Management:

● Maintain technical documentation for support tooling, workflows, and integrations.
● Contribute to knowledge bases (internal + external) alongside ops and content teams.
● Foster a documentation-first culture to enable faster onboarding and smoother collaboration.

What We’re Looking For

Must-Have

● 5–7 years of experience in software engineering, with at least 2+ years in a senior/lead role.
● Proven track record in building internal platforms, support tools, or automation systems.
● Strong technical skills: Python/Node/Java, SQL, cloud services (AWS/GCP), and integration experience with ticketing/support platforms (e.g., Zendesk, Freshdesk, ServiceNow, Jira Service Management).
● Experience leading small teams and owning delivery from design → build → release.
● Excellent problem-solving skills with a bias for execution in fast-paced environments.

Nice to Have

● Exposure to SaaS or healthcare platforms with multi-tenant architecture.
● Familiarity with AI/ML-driven support solutions (classification, prediction, NLP chatbots).
● Hands-on experience with support metrics and dashboards (CSAT, SLA adherence, TTR).
● Knowledge of documentation frameworks (Confluence, Notion, Git-based wikis).

Read more
Service based company


Agency job
via Codemind Staffing Solutions by Krishna kumar
Chennai
4 - 7 yrs
₹15L - ₹25L / yr
Python
Generative AI
Flask
Django
FastAPI
+4 more

Job Title: Python Backend / GenAI Engineer (4+ Years)

Job Summary

We are looking for a Python Backend Engineer with experience in Generative AI, LangGraph workflows, data engineering, and AI evaluation using Arize AI.

Responsibilities

* Develop backend APIs using Python (FastAPI / Flask / Django)

* Build Generative AI and RAG-based applications

* Design LangGraph / agent workflows

* Create data engineering pipelines (ETL, data processing)

* Implement LLM monitoring and evaluation using Arize AI

* Integrate vector databases and AI services

* Maintain scalable and production-ready backend systems

Required Skills

* 4+ years of Python backend development

* Experience in Generative AI / LLM applications

* Knowledge of LangGraph / LangChain

* Experience in data engineering pipelines

* Familiarity with Arize AI or model evaluation tools

* Understanding of REST APIs, databases, Docker

Good to Have

* Cloud platforms (Azure / AWS )

* Vector databases (FAISS, Pinecone, Azure AI Search)

Read more
Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 or P4-Fusion to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
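A common pre-migration step for the large-file responsibility above is to scan the working tree for blobs over GitHub's 100 MB hard limit and generate `.gitattributes` patterns for Git LFS. The helper below is a hypothetical sketch of that check only; the actual rewrite of history would be done with `git lfs migrate import --include="*.bin"` or equivalent tooling.

```python
# Hypothetical pre-migration audit: find files exceeding GitHub's 100 MB
# blob limit and emit .gitattributes lines routing them through Git LFS.
# The real history rewrite would use `git lfs migrate import`.

import os
import tempfile

LIMIT = 100 * 1024 * 1024  # GitHub rejects pushes containing blobs over 100 MB

def oversized_files(root: str, limit: int = LIMIT) -> list[str]:
    """Return repo-relative paths of files exceeding the size limit."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                hits.append(os.path.relpath(path, root))
    return sorted(hits)

def lfs_attributes(paths: list[str]) -> str:
    """Build .gitattributes lines that send matching extensions to LFS."""
    exts = sorted({os.path.splitext(p)[1] for p in paths if os.path.splitext(p)[1]})
    return "\n".join(f"*{ext} filter=lfs diff=lfs merge=lfs -text" for ext in exts)

# Demo with a tiny limit so the sketch is self-contained:
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "big.bin"), "wb") as f:
        f.write(b"\0" * 2048)
    with open(os.path.join(tmp, "small.txt"), "w") as f:
        f.write("ok")
    found = oversized_files(tmp, limit=1024)

attrs = lfs_attributes(["assets/big.bin"])
```

Running the audit before the migration keeps the first push from being rejected mid-transfer.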

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 and P4-Fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Read more
Chennai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Data Science
Python
Forecasting
Machine Learning (ML)

Hi,


Greetings from Ampera!


We are looking for a Data Scientist with strong Python and forecasting experience.


Title: Data Scientist – Python & Forecasting

Experience: 4 to 7 Yrs

Location: Chennai/Bengaluru

Type of hire: PWD and Non-PWD

Employment Type: Full Time

Notice Period: Immediate Joiner

Working hours: 09:00 a.m. to 06:00 p.m.

Workdays: Mon - Fri

 

 

Job Description:

 

We are looking for an experienced Data Scientist with strong expertise in Python programming and forecasting techniques. The ideal candidate should have hands-on experience building predictive and time-series forecasting models, working with large datasets, and deploying scalable solutions in production environments.


Key Responsibilities

  • Develop and implement forecasting models (time-series and machine learning based).
  • Perform exploratory data analysis (EDA), feature engineering, and model validation.
  • Build, test, and optimize predictive models for business use cases such as demand forecasting, revenue prediction, trend analysis, etc.
  • Design, train, validate, and optimize machine learning models for real-world business use cases.
  • Apply appropriate ML algorithms based on business problems and data characteristics.
  • Write clean, modular, and production-ready Python code.
  • Work extensively with Python Packages & libraries for data processing and modelling.
  • Collaborate with Data Engineers and stakeholders to deploy models into production.
  • Monitor model performance and improve accuracy through continuous tuning.
  • Document methodologies, assumptions, and results clearly for business teams.

 

Technical Skills Required:

Programming

  • Strong proficiency in Python
  • Experience with Pandas, NumPy, Scikit-learn

Forecasting & Modelling

  • Hands-on experience in Time Series Forecasting (ARIMA, SARIMA, Prophet, etc.)
  • Experience with ML-based forecasting models (XGBoost, LightGBM, Random Forest, etc.)
  • Understanding of seasonality, trend decomposition, and statistical modeling
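The core idea behind the smoothing-based forecasters named above can be illustrated with simple exponential smoothing, the most basic member of the family. This is a toy sketch with illustrative numbers, not a substitute for ARIMA/SARIMA/Prophet, which additionally model trend and seasonality.

```python
# Baseline time-series forecast via simple exponential smoothing (SES).
# Toy illustration only; production forecasting would use statsmodels,
# Prophet, or gradient-boosted trees as listed in the requirements.

def exp_smooth_forecast(series: list[float], alpha: float = 0.5, steps: int = 3) -> list[float]:
    """Smooth the history, then project the last level flat for `steps` periods."""
    level = series[0]
    for y in series[1:]:
        # Blend each new observation with the accumulated level:
        level = alpha * y + (1 - alpha) * level
    # SES has no trend/seasonal terms, so its forecast is constant:
    return [level] * steps

demand = [100.0, 110.0, 105.0, 115.0]
forecast = exp_smooth_forecast(demand, alpha=0.5, steps=2)
```

The smoothing factor `alpha` trades responsiveness to recent data against noise suppression, the same bias-variance trade-off the richer models manage with more parameters.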

Data & Deployment

  • Experience handling structured and large datasets
  • SQL proficiency
  • Exposure to model deployment (API-based deployment preferred)
  • Knowledge of MLOps concepts is an added advantage

Tools (Preferred)

  • TensorFlow / PyTorch (optional)
  • Airflow / MLflow
  • Cloud platforms (AWS / Azure / GCP)


Educational Qualification

  • Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Mathematics, or related field.


Key Competencies

  • Strong analytical and problem-solving skills
  • Ability to communicate insights to technical and non-technical stakeholders
  • Experience working in agile or fast-paced environments


Accessibility & Inclusion Statement

We are committed to creating an inclusive environment for all employees, including persons with disabilities. Reasonable accommodations will be provided upon request.

Equal Opportunity Employer (EOE) Statement

Ampera Technologies is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Read more
Greatify
Posted by Ciline Sanjanyaa
Chennai
8 - 11 yrs
₹7L - ₹17L / yr
Playwright
Python
Appium
Test management
Automation
+3 more

About the job:

Job Title: QA Lead

Location: Teynampet, Chennai

Job Type: Full-time

Experience Level: 8+ Years

Company: Gigadesk Technologies Pvt. Ltd. [Greatify]

Website: www.greatify.ai


Company Description:

At Greatify.ai, we lead the transformation of educational institutions with state-of-the-art AI-driven solutions. Serving 100+ institutions globally, our mission is to unlock their full potential by enhancing learning experiences and streamlining operations. Join us to empower the future of education with innovative technology.


Job Description:

We are looking for a QA Lead to own and drive the quality strategy across our product suite. This role combines hands-on automation expertise with team leadership, process ownership, and cross-functional collaboration.

As a QA Lead, you will define testing standards, guide the QA team, ensure high test coverage across web and mobile platforms, and act as the quality gatekeeper for all releases.


Key Responsibilities:

● Leadership & Ownership

  • Lead and mentor the QA team, including automation and manual testers
  • Define QA strategy, test plans, and quality benchmarks across products
  • Own release quality and provide go/no-go decisions for deployments
  • Collaborate closely with Engineering, Product, and DevOps teams

● Automation & Testing

  • Oversee and contribute to automation using Playwright (Python) for web applications
  • Guide mobile automation efforts using Appium (iOS & Android)
  • Ensure comprehensive functional, regression, integration, and smoke test coverage
  • Review automation code for scalability, maintainability, and best practices

● Process & Quality Improvement

  • Establish and improve QA processes, documentation, and reporting
  • Drive shift-left testing and early defect detection
  • Ensure API testing coverage and support performance/load testing initiatives
  • Track QA metrics, defect trends, and release health

● Stakeholder Collaboration

  • Work with Product Managers to understand requirements and define acceptance criteria
  • Communicate quality risks, timelines, and test results clearly to stakeholders
  • Act as the single point of accountability for QA deliverables


Skills & Qualifications:

● Required Skills

  • Strong experience in QA leadership or senior QA roles
  • Proficiency in Python-based test automation
  • Hands-on experience with Playwright for web automation
  • Experience with Appium for mobile automation
  • Strong understanding of REST API testing (Postman / automation scripts)
  • Experience integrating tests into CI/CD pipelines (GitLab CI, Jenkins, etc.)
  • Solid understanding of SDLC, STLC, and Agile methodologies

● Good to Have

  • Exposure to performance/load testing tools (Locust, JMeter, k6)
  • Experience in EdTech or large-scale transactional systems
  • Knowledge of cloud-based environments and release workflows.
Read more
Greatify
Posted by Ciline Sanjanyaa
Chennai
5 - 9 yrs
₹5L - ₹15L / yr
Playwright
Python
Test Automation (QA)
Manual testing
Appium
+1 more

ABOUT THE JOB:

Job Title: QA Automation Specialist

Location: Teynampet, Chennai

Job Type: Full-time

Company: Gigadesk Technologies Pvt. Ltd. [Greatify.ai]


COMPANY DESCRIPTION:

At Greatify.ai, we are transforming educational institutions with cutting-edge AI-powered solutions. Our platform acts as a smart operating system for colleges, schools, and universities—enhancing learning, streamlining operations, and maximizing efficiency. With 100+ institutions served, 100,000+ students impacted globally, and 1,000+ educators empowered, we are redefining the future of education.


COMPANY WEBSITE: https://www.greatify.ai/


ROLE OVERVIEW:

We are seeking a QA Automation Engineer to strengthen our quality assurance capabilities across a rapidly scaling product suite. The ideal candidate will champion end-to-end automation coverage for both web and native mobile platforms, ensuring robust, scalable, and maintainable test infrastructure. You will primarily work with our existing Playwright-Python automation framework, and extend its reach across modules, services, and platforms. This role demands strong strategic ownership and hands-on scripting ability in a fast-paced product engineering environment.


KEY RESPONSIBILITIES:

● Design, implement, and maintain automation test suites for web and mobile applications with a strong emphasis on modularity and reuse.

● Create end-to-end test workflows using Playwright (Python) covering UI validation, user journeys, and edge-case scenarios.

● Develop native mobile automation scripts using Appium for both iOS and Android platforms.

● Ensure comprehensive coverage across functional, regression, smoke, and integration testing stages.

● Perform REST API automation with Python frameworks or Postman (including auth flows, payload validation, status codes, schema validation).

● Support and extend performance/load/stress testing suites using tools such as Locust, JMeter, or k6, especially for high-traffic modules.

● Integrate test automation into existing CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions, including parallel test execution and automated reporting.

● Collaborate with cross-functional squads—product managers, developers, and UX designers—to define acceptance criteria and ensure fast, high-quality delivery.

● Contribute to QA documentation including test plans, bug reports, and coverage maps, ensuring traceability and audit readiness.

● Apply version control best practices (Git) and actively participate in code reviews and knowledge-sharing sessions.

● Continuously optimize test data management, especially for environments requiring mocked services or synthetic data generation.

● Lead efforts in flaky test detection, test suite triage, and root cause analysis of test failures.


KEY SKILLS:

● Strong coding skills in Python, with the ability to architect and extend automation frameworks from scratch.

● Experience using Playwright with Python for dynamic web UI test cases, including Shadow DOM handling and selector strategy.

● Proficiency with Appium and emulator/simulator setups for mobile test environments.

● Solid understanding of software development lifecycle (SDLC) and QA methodologies including agile/DevOps workflows.

● Familiarity with test reporting tools (e.g., Allure, TestNG, ReportPortal) for visual insights into automation health.

● Experience in cloud-based test execution platforms such as BrowserStack, LambdaTest, or Sauce Labs.

● Exposure to containerization tools (Docker) for isolated test environments.

● Understanding of security and accessibility testing would be a plus.

Read more
BigThinkCode Technologies
Posted by Kumar AGS
Chennai
2 - 5 yrs
₹1L - ₹15L / yr
Python
Django
API
FastAPI

At BigThinkCode, our technology solves complex problems. We are looking for a talented engineer to join our technology team at Chennai.

  

This is an opportunity to join a growing team and make a substantial impact at BigThinkCode. We offer a challenging workplace that welcomes innovative ideas and talent, with growth opportunities and a positive environment.

 

The job description is below for your reference; if interested, please share your profile so we can connect and discuss.

 

Company: BigThinkCode Technologies

URL: https://www.bigthinkcode.com/

Job Role: Software Engineer / Senior Software Engineer

Experience: 2 - 5 years

Work location: Chennai

Work Mode: Hybrid

Joining time: Immediate – 4 weeks


Responsibilities

  • Build and enhance backend features as part of the tech team.
  • Take ownership of features end-to-end in a fast-paced product/startup environment.
  • Collaborate with managers, designers, and engineers to deliver user-facing functionality.
  • Design and implement scalable REST APIs and supporting backend systems.
  • Write clean, reusable, well-tested code and contribute to internal libraries.
  • Participate in requirement discussions and translate business needs into technical tasks.
  • Support the technical roadmap through architectural input and continuous improvement.

 

Required Skills:

  • Strong understanding of Algorithms, Data Structures, and OOP principles.
  • Experience integrating with third-party systems (payment/SMS APIs, mapping services, etc.).
  • Proficiency in Python and experience with at least one framework (Flask / Django / FastAPI).
  • Hands-on experience with design patterns, debugging, and unit testing (pytest/unittest).
  • Working knowledge of relational or NoSQL databases and ORMs (SQLAlchemy / Django ORM).
  • Familiarity with asynchronous programming (async/await, FastAPI async).
  • Experience with caching mechanisms (Redis).
  • Ability to perform code reviews and maintain code quality.
  • Exposure to cloud platforms (AWS/Azure/GCP) and containerization (Docker).
  • Experience with CI/CD pipelines.
  • Basic understanding of message brokers (RabbitMQ / Kafka / Redis streams).
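The asynchronous-programming skill listed above boils down to awaiting I/O-bound work concurrently instead of sequentially. The snippet below is a minimal illustration; the `fetch` stub and its timings are hypothetical stand-ins for real network calls in a FastAPI handler.

```python
# Tiny illustration of async/await: two I/O-bound calls run concurrently
# via asyncio.gather instead of back-to-back. fetch() is a stand-in for
# a real network call (HTTP request, Redis lookup, DB query).

import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # simulates waiting on I/O
    return f"{name}:done"

async def main() -> list[str]:
    # gather() schedules both coroutines on the event loop at once,
    # so total wall time is ~max(delays), not their sum.
    return await asyncio.gather(fetch("users", 0.01), fetch("orders", 0.01))

results = asyncio.run(main())
```

Results come back in argument order regardless of which coroutine finishes first, which keeps response assembly deterministic.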

 

Benefits:

  • Medical cover for employee and eligible dependents.
  • Tax-beneficial salary structure.
  • Comprehensive leave policy.
  • Competency development training programs.

 

Read more
Pentabay Softwares

Posted by Sandhiya M
Chennai
0.5 - 4 yrs
₹2L - ₹6L / yr
MongoDB
Express.js
React.js
NodeJS (Node.js)
RESTful APIs
+4 more

Job Title: MERN Stack Developer

Company: Pentabay Softwares

Location: Anna Salai (Mount Road), Chennai

Employment Type: Full-Time

Experience Required: 1–4 Years


Job Description:


Pentabay Softwares is looking for a skilled and motivated MERN Stack Developer to join our growing team. The ideal candidate should have hands-on experience in developing scalable web applications using MongoDB, Express.js, React.js, and Node.js.


Roles & Responsibilities:

  • Develop and maintain web applications using the MERN stack
  • Build reusable, efficient, and scalable code
  • Collaborate with UI/UX designers and backend teams
  • Design and integrate RESTful APIs
  • Troubleshoot, debug, and optimize application performance
  • Participate in code reviews and follow best development practices
  • Work closely with project managers to meet deadlines


Required Skills:

  • Strong experience with MongoDB, Express.js, React.js, and Node.js
  • Proficiency in JavaScript (ES6+), HTML5, and CSS3
  • Experience with REST APIs and third-party integrations
  • Knowledge of Git/version control systems
  • Basic understanding of security and performance optimization
  • Familiarity with Agile/Scrum methodology

Good to Have:

  • Experience with Redux, Next.js, or TypeScript
  • Exposure to cloud platforms (AWS, Azure, or GCP)
  • Understanding of CI/CD pipelines

Who Can Apply:

  • Candidates with 1–4 years of relevant experience
  • Strong problem-solving and communication skills
  • Ability to work independently and as part of a team

Work Location:

📍 Anna Salai (Mount Road), Chennai

Read more
Indore, Chennai
3 - 7 yrs
₹5L - ₹15L / yr
Data engineering
Python
Apache
databricks

Required Skills & Qualifications

Technical Skills

  • Strong hands-on experience with Databricks and Apache Spark.
  • Proficiency in Python and SQL.
  • Proven experience in data mapping, transformation, and data modeling.
  • Experience integrating data from APIs, databases, and cloud storage.
  • Solid understanding of ETL/ELT concepts and data warehousing principles.


Key Responsibilities

Data Source Identification & Quality Assessment

Data Mapping & Integration

  • Define and maintain comprehensive data mapping between source systems and Databricks tables.
  • Design and implement scalable ETL/ELT pipelines using Databricks and Apache Spark.

Databricks & Data Modeling

  • Develop and optimize Databricks workloads using Spark and Delta Lake.
  • Design efficient data models optimized for performance, analytics, and API consumption.
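The source-to-target mapping responsibility above can be expressed as a small declarative table applied in the transform step. This is a pure-Python sketch with hypothetical column names and cast rules; in Databricks the same logic would run as a PySpark `select`/`withColumn` job over Delta tables.

```python
# Illustrative source-to-target column mapping for an ETL transform step.
# Column names and cast rules are hypothetical; on Databricks this would
# be a PySpark job writing to Delta Lake.

MAPPING = {
    "cust_id": ("customer_id", int),      # source column -> (target column, cast)
    "cust_name": ("customer_name", str),
    "order_amt": ("order_amount", float),
}

def transform(row: dict) -> dict:
    """Rename and cast one source record into the target schema."""
    out = {}
    for src, (dst, cast) in MAPPING.items():
        if src in row and row[src] is not None:  # drop missing/null source values
            out[dst] = cast(row[src])
    return out

record = transform({"cust_id": "42", "cust_name": "Asha", "order_amt": "199.5"})
```

Keeping the mapping in one declarative structure makes it easy to review against the mapping document and to regenerate when source schemas change.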


Read more
Rezolveai

Posted by Udaya Reddy
Bengaluru (Bangalore), Chennai
3 - 5 yrs
₹1L - ₹1L / yr
Python
Vector database
Large Language Models (LLM)

As a Python Engineer, you will play a critical role in building and scaling data pipelines, developing prompts for large language models (LLMs), and deploying them as efficient, scalable APIs. You will collaborate closely with data scientists, product managers, and other engineers to ensure seamless integration of data solutions and LLM functionalities. This role requires expertise in Python, API design, data engineering tools, and a strong understanding of LLMs and their applications.

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
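A declarative Jenkins pipeline of the kind described above follows a fixed `pipeline { stages { … } }` shape. The fragment below is a minimal config sketch; the agent, stage commands, service name, and `$IMAGE` variable are placeholders, not a specific project's pipeline.

```groovy
// Minimal declarative Jenkinsfile sketch; agent, commands, and names
// are placeholders for illustration only.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'python -m pip install -r requirements.txt' }
        }
        stage('Test') {
            steps { sh 'pytest --junitxml=results.xml' }
        }
        stage('Deploy') {
            when { branch 'main' }  // deploy only from the main branch
            steps { sh 'gcloud run deploy my-service --image "$IMAGE"' }
        }
    }
    post {
        always { junit 'results.xml' }  // publish test results pass or fail
    }
}
```

The `when` and `post` blocks are where reliability policy lives: gating deploys by branch and publishing results even on failure.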


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Read more
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

Neuvamacro Technology Pvt Ltd
Chennai, hybrid
0 - 0 yrs
₹2L - ₹2.5L / yr
Fullstack Developer
MEAN stack
MERN Stack
skill iconDjango
skill iconJavascript
+7 more

About the Role

We are looking for highly motivated React Fresher Developers who are passionate about building modern, scalable web applications. If you have completed a full-stack or React development course and are eager to apply your skills in real-world projects, we would love to hear from you.

This role is ideal for individuals who are proactive, eager to learn, and ready to contribute to dynamic, collaborative teams.

Mandatory Requirements

Completed a 6-month structured certification program in one of the following:

  • Full-Stack Development (MERN / MEAN / Django + React)
  • React.js Development

Solid understanding of:

  • React fundamentals (Hooks, Props, State, Components)
  • Modern JavaScript (ES6+)
  • REST APIs and asynchronous data handling
  • Git / GitHub (basic usage acceptable)
  • At least one completed academic or internship project demonstrating your coding skills.

Preferred / Bonus Advantage

Experience with chatbots or AI-powered conversational interfaces is a plus. This could include:

  • Platforms / frameworks like Dialogflow, RASA, IBM Watson, Botpress, or OpenAI API
  • Custom chatbot development using Node.js or Python
  • A project demonstrating chatbot integration or implementation

Note: A chatbot-related project or feature will be considered a strong plus.

Responsibilities

  • Develop responsive and interactive UI components using React.js and modern front-end technologies
  • Collaborate with backend teams to integrate APIs and contribute to product features
  • Participate in code reviews, testing, and deployment processes
  • Continuously explore, learn, and implement new technologies and best practices

NOTE: A laptop with high-speed internet is mandatory, and the candidate must be based in Chennai.

Coinfantasy
Posted by Indira Priyadharshini
Chennai
6 - 15 yrs
₹10L - ₹40L / yr
skill iconPython
PyTorch
Large Language Models (LLM) tuning
Large Language Models (LLM)
Generative AI
+2 more

CoinFantasy is looking for an experienced Senior AI Architect to lead both the decentralised protocol development and the design of AI-driven applications on this network. As a visionary in AI and distributed computing, you will play a central role in shaping the protocol’s technical direction, enabling efficient task distribution, and scaling AI use cases across a heterogeneous, decentralised infrastructure.

Job Responsibilities

  • Architect and oversee the protocol’s development, focusing on dynamic node orchestration, layer-wise model sharding, and secure, P2P network communication.
  • Drive the end-to-end creation of AI applications, ensuring they are optimised for decentralised deployment and include use cases with autonomous agent workflows.
  • Architect AI systems capable of running on decentralised networks, ensuring they balance speed, scalability, and resource usage.
  • Design data pipelines and governance strategies for securely handling large-scale, decentralised datasets.
  • Implement and refine strategies for swarm intelligence-based task distribution and resource allocation across nodes. Identify and incorporate trends in decentralised AI, such as federated learning and swarm intelligence, relevant to various industry applications.
  • Lead cross-functional teams in delivering full-precision computing and building a secure, robust decentralised network.
  • Represent the organisation’s technical direction, serving as the face of the company at industry events and client meetings.

Requirements

  • Bachelor’s/Master’s/Ph.D. in Computer Science, AI, or related field.
  • 12+ years of experience in AI/ML, with a track record of building distributed systems and AI solutions at scale.
  • Strong proficiency in Python, Golang, and machine learning frameworks (e.g., TensorFlow, PyTorch).
  • Expertise in decentralised architecture, P2P networking, and heterogeneous computing environments.
  • Excellent leadership skills, with experience in cross-functional team management and strategic decision-making.
  • Strong communication skills, adept at presenting complex technical solutions to diverse audiences.

About Us

CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users. It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.

Building on this foundation, we are now developing a groundbreaking decentralised protocol that will transform the AI landscape.

Website:

Benefits

  • Competitive Salary
  • An opportunity to be part of the Core team in a fast-growing company
  • A fulfilling, challenging and flexible work experience
  • Practically unlimited professional and career growth opportunities

Chennai
0 - 0 yrs
₹2.5L - ₹3L / yr
skill iconPython
skill iconJava
skill iconJavascript
SQL
skill iconGit
+3 more

We are seeking enthusiastic and motivated fresh graduates with a strong foundation in programming, primarily in Python, and basic knowledge of Java, C#, or JavaScript. This role offers hands-on experience in developing applications, writing clean code, and collaborating on real-world projects under expert guidance.


Key Responsibilities

• Develop and maintain applications using Python as the primary language.

• Assist in coding, debugging, and testing software modules in Java, C#, or JavaScript as needed.

• Collaborate with senior developers to learn best practices and contribute to project deliverables.

• Write clean, efficient, and well-documented code.

• Participate in code reviews and follow standard development processes.

• Continuously learn and adapt to new technologies and frameworks.


Core Expectations

• Eagerness to Learn: Open to acquiring new programming skills and frameworks.

• Adaptability: Ability to work across multiple languages and environments.

• Problem-Solving: Strong analytical skills to troubleshoot and debug issues.

• Team Collaboration: Work effectively with peers and seniors.

• Professionalism: Good communication skills and a positive attitude.


Qualifications

• Bachelor’s degree in Computer Science, IT, or related field.

• Strong understanding of Python (OOP, data structures, basic frameworks like Flask/Django).

• Basic knowledge of Java, C#, or JavaScript.

• Familiarity with version control systems (Git).

• Understanding of databases (SQL/NoSQL) is a plus.
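The Python fundamentals the role asks for (OOP, data structures) can be illustrated with a minimal sketch; the class and the tasks below are invented purely for illustration:

```python
# Illustrative only: a tiny class showing OOP basics and list-based data handling.
class TaskQueue:
    """A minimal FIFO task queue built on a plain Python list."""

    def __init__(self):
        self._items = []

    def add(self, task):
        self._items.append(task)

    def next_task(self):
        # Pop from the front to preserve first-in, first-out order.
        return self._items.pop(0) if self._items else None


queue = TaskQueue()
queue.add("write tests")
queue.add("review PR")
print(queue.next_task())  # the first task added comes out first
```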

NOTE: A laptop with high-speed internet is mandatory

Semiconductor Manufacturing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
5 - 8 yrs
₹40L - ₹48L / yr
skill iconPython
skill iconMachine Learning (ML)
Image Processing
skill iconDeep Learning
Algorithms
+28 more

🎯 Ideal Candidate Profile:

This role requires a seasoned engineer/scientist with a strong academic background from a premier institution and significant hands-on experience in deep learning (specifically image processing) within a hardware or product manufacturing environment.


📋 Must-Have Requirements:

Experience & Education Combinations:

Candidates must meet one of the following criteria:

  • Doctorate (PhD) + 2 years of related work experience
  • Master's Degree + 5 years of related work experience
  • Bachelor's Degree + 7 years of related work experience


Technical Skills:

  • Minimum 5 years of hands-on experience in all of the following:
  • Python
  • Deep Learning (DL)
  • Machine Learning (ML)
  • Algorithm Development
  • Image Processing
  • 3.5 to 4 years of strong proficiency with PyTorch OR TensorFlow / Keras.


Industry & Institute:

  • Education: Must be from a premier institute (IIT, IISc, IIIT, NIT, BITS) or a recognized regional tier-1 college.
  • Industry: Current or past experience in a Product, Semiconductor, or Hardware Manufacturing company is mandatory.
  • Preference: Candidates from engineering product companies are strongly preferred.


ℹ️ Additional Role Details:

  • Interview Process: 3 technical rounds followed by 1 HR round.
  • Work Model: Hybrid (requiring 3 days per week in the office).




📝 Required Skills and Competencies:

💻 Programming & ML Prototyping:

  • Strong Proficiency: Python, Data Structures, and Algorithms.
  • Hands-on Experience: NumPy, Pandas, Scikit-learn (for ML prototyping).
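As a rough illustration of the NumPy-based ML prototyping listed above, here is a toy nearest-centroid baseline on synthetic data; the data, the model choice, and all numbers are illustrative, not part of the role:

```python
import numpy as np

# Synthetic two-class data: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# One centroid per class; predict by nearest centroid in Euclidean distance.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

accuracy = (predict(X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

This is the kind of quick baseline worth running before reaching for a full scikit-learn estimator.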


🤖 Machine Learning Frameworks:

  • Core Concepts: Solid understanding of:
  • Supervised/Unsupervised Learning
  • Regularization
  • Feature Engineering
  • Model Selection
  • Cross-Validation
  • Ensemble Methods: Experience with models like XGBoost and LightGBM.


🧠 Deep Learning Techniques:

  • Frameworks: Proficiency with PyTorch OR TensorFlow / Keras.
  • Architectures: Knowledge of:
  • Convolutional Neural Networks (CNNs)
  • Recurrent Neural Networks (RNNs)
  • Long Short-Term Memory networks (LSTMs)
  • Transformers
  • Attention Mechanisms
  • Optimization: Familiarity with optimization techniques (e.g., Adam, SGD), Dropout, and Batch Normalization.


💬 LLMs & RAG (Retrieval-Augmented Generation):

  • Hugging Face: Experience with the Transformers library (tokenizers, embeddings, model fine-tuning).
  • Vector Databases: Familiarity with Milvus, FAISS, Pinecone, or ElasticSearch.
  • Advanced Techniques: Proficiency in:
  • Prompt Engineering
  • Function/Tool Calling
  • JSON Schema Outputs
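The retrieval step behind the RAG skills above can be sketched with plain cosine similarity; the documents and "embeddings" here are made-up toy vectors, whereas in practice they would come from an embedding model and live in a vector database such as FAISS, Milvus, or Pinecone:

```python
import numpy as np

# Toy corpus with hand-written 2-D "embeddings" (illustrative only).
docs = ["reset your password", "update billing info", "delete your account"]
doc_vecs = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])

def top_k(query_vec, k=1):
    # Cosine similarity between the query vector and every document vector.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# A query "about passwords" points mostly along the first axis.
print(top_k(np.array([1.0, 0.0])))
```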


🛠️ Data & Tools:

  • Data Management: SQL fundamentals; exposure to data wrangling and pipelines.
  • Tools: Experience with Git/GitHub, Jupyter, and basic Docker.


🎓 Minimum Qualifications (Experience & Education Combinations):

Candidates must have experience building AI systems/solutions with Machine Learning, Deep Learning, and LLMs, meeting one of the following criteria:

  • Doctorate (Academic) Degree + 2 years of related work experience.
  • Master's Level Degree + 5 years of related work experience.
  • Bachelor's Level Degree + 7 years of related work experience.


⭐ Preferred Traits and Mindset:

  • Academic Foundation: Solid academic background with strong applied ML/DL exposure.
  • Curiosity: Eagerness to learn cutting-edge AI and willingness to experiment.
  • Communication: Clear communicator who can explain ML/LLM trade-offs simply.
  • Ownership: Strong problem-solving and ownership mindset.
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai
7 - 10 yrs
₹10L - ₹18L / yr
full stack
skill iconReact.js
skill iconPython
skill iconGo Programming (Golang)
CI/CD
+9 more

Full-Stack Developer

Exp: 5+ years required

Night shift: 8 PM to 5 AM or 9 PM to 6 AM

Only immediate joiners can apply


We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.

Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.

Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.


Hunarstreet technologies pvt ltd


Agency job
Chennai, Hyderabad, Bengaluru (Bangalore), Mumbai, Pune, Gurugram, Mohali, Panchkula
5 - 15 yrs
₹10L - ₹15L / yr
Fullstack Developer
Web Development
skill iconJavascript
TypeScript
skill iconGo Programming (Golang)
+5 more

We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.


Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.


Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.

Resulticks
Posted by Sagadevan Ramamoorthy
Chennai
7 - 12 yrs
₹20L - ₹32L / yr
skill iconPython
pandas
NumPy

What you’ll do here:

• Lead the technical implementation of our platform using Python for new and existing clients

• Collaborate with internal teams to deliver seamless onboarding experiences for our clients.

• Translate business use-cases into scalable, secure, and maintainable technical solutions

• Own the technical relationship with clients, acting as a trusted advisor throughout the implementation lifecycle

• Conduct code reviews, enforce best practices, and ensure high-quality deliverables

• Mentor developers and contribute to internal tooling and automation

• Troubleshoot and resolve complex integration issues across cloud environments


What you will need to thrive:

• 8+ years of software development experience.

• Strong proficiency in Python (preferred), C# .NET, or C/C++.

• Proven track record of leading technical implementations for enterprise clients.

• Deep understanding of Pandas, NumPy, RESTful APIs, webhooks, and data transformation pipelines.

• Excellent communication and stakeholder management skills.
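The Pandas/NumPy data-transformation work described above can be sketched as a small method-chained pipeline; the client feed and column names here are hypothetical:

```python
import pandas as pd

# Hypothetical client feed: amounts arrive as strings and need cleaning
# before aggregation, a common first step in a transformation pipeline.
raw = pd.DataFrame({
    "client": ["acme", "acme", "globex"],
    "amount": ["10.5", "3.0", "7.25"],
})

clean = (
    raw.assign(amount=lambda df: pd.to_numeric(df["amount"]))  # cast to float
       .groupby("client", as_index=False)["amount"].sum()      # total per client
       .sort_values("amount", ascending=False)                 # largest first
)
print(clean)
```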

This is a full-time role with our client


Agency job
via eTalent Services by JaiPrakash Bharti
Tiruchirappalli, Chennai
5 - 10 yrs
₹15L - ₹20L / yr
skill iconPython
Google Cloud Platform (GCP)
Bigquery
skill iconDocker
Data Engineer
+1 more

Senior Data Engineer

Experience: 5+ years

Chennai/ Trichy (Hybrid)


Type: Full-time

 

Skills: GCP + Airflow + Bigquery + Python + Docker

 

The Role

As a Senior Data Engineer, you own new initiatives, design and build world-class platforms to measure and optimize ad performance. You ensure industry-leading scalability and reliability of mission-critical systems processing billions of real-time transactions a day. You apply state-of-the-art technologies, frameworks, and strategies to address complex challenges with Big Data processing and analytics. You work closely with the talented engineers across different time zones in building industry-first solutions to measure and optimize ad performance.

 

What you’ll do

● Write solid code with a focus on high performance for services supporting high throughput and low latency

● Architect, design, and build big data processing platforms handling tens of TBs/Day, serve thousands of clients, and support advanced analytic workloads

● Provide meaningful and relevant feedback to junior developers and stay up-to-date with system changes

● Explore the technological landscape for new ways of producing, processing, and analyzing data to gain insights into both our users and our product features

● Design, develop, and test data-driven products, features, and APIs that scale

● Continuously improve the quality of deliverables and SDLC processes

● Operate production environments, investigate issues, assess their impact, and develop feasible solutions.

● Understand business needs and work with product owners to establish priorities 

● Bridge the gap between Business / Product requirements and technical details

● Work in multi-functional agile teams with end-to-end responsibility for product development and delivery

 

Who you are

● 3-5+ years of programming experience in coding, object-oriented design, and/or functional programming, including Python or a related language

● Love what you do, are passionate about crafting clean code, and have a solid foundation.

● Deep understanding of distributed system technologies, standards, and protocols, with 2+ years of experience working in distributed systems such as Airflow, BigQuery, Spark, and the Kafka ecosystem (Kafka Connect, Kafka Streams, or Kinesis), and building data pipelines at scale.

● Excellent SQL and dbt query-writing abilities, and strong data understanding

● Care about agile software processes, data-driven development, reliability, and responsible experimentation 

● Genuine desire to automate decision-making, processes, and workflows

● Experience working with orchestration tools like Airflow

● Good understanding of semantic layers and experience in tools like LookML, Kube

● Excellent communication skills and a team player

● Google BigQuery or Snowflake

● Cloud environment, Google Cloud Platform 

● Container technologies - Docker / Kubernetes

● Ad-serving technologies and standards 

● Familiarity with AI tools like Cursor AI, CoPilot.
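The SQL skills listed above can be sketched with the stdlib sqlite3 module, which supports window functions; the table and figures below are made up for illustration:

```python
import sqlite3

# In-memory database with a toy impressions table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE impressions (campaign TEXT, day TEXT, views INT)")
con.executemany(
    "INSERT INTO impressions VALUES (?, ?, ?)",
    [("a", "d1", 10), ("a", "d2", 30), ("b", "d1", 20)],
)

# Running total of views per campaign, ordered by day, via a window function.
rows = con.execute(
    """
    SELECT campaign, day,
           SUM(views) OVER (PARTITION BY campaign ORDER BY day) AS running
    FROM impressions
    ORDER BY campaign, day
    """
).fetchall()
print(rows)  # [('a', 'd1', 10), ('a', 'd2', 40), ('b', 'd1', 20)]
```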

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Global Leader in Diversified Electronics


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
5 - 10 yrs
₹20L - ₹48L / yr
skill iconPython
skill iconDeep Learning
skill iconMachine Learning (ML)
Algorithm Development
Image Processing
+3 more


JOB DESCRIPTION/PREFERRED QUALIFICATIONS:

REQUIRED SKILLS/COMPETENCIES:


Programming Languages:

  • Strong in Python, data structures, and algorithms.
  • Hands-on with NumPy, Pandas, Scikit-learn for ML prototyping.


Machine Learning Frameworks:

  • Understanding of supervised/unsupervised learning, regularization, feature engineering, model selection, cross-validation, ensemble methods (XGBoost, LightGBM).


Deep Learning Techniques:

  • Proficiency with PyTorch or TensorFlow/Keras
  • Knowledge of CNNs, RNNs, LSTMs, Transformers, Attention mechanisms.
  • Familiarity with optimization (Adam, SGD), dropout, batch norm.
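The optimizers named above all build on the same gradient-descent step; here is a minimal sketch on a toy one-dimensional quadratic loss (Adam adds adaptive per-parameter learning rates and momentum on top of this basic update):

```python
# Plain gradient descent on loss(w) = (w - 3)^2, whose minimum is at w = 3.
def loss_grad(w):
    # d/dw of (w - 3)^2
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * loss_grad(w)  # step against the gradient

print(round(w, 4))  # converges toward the minimum at w = 3
```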


LLMs & RAG:

  • Hugging Face Transformers (tokenizers, embeddings, model fine-tuning).
  • Vector databases (Milvus, FAISS, Pinecone, ElasticSearch).
  • Prompt engineering, function/tool calling, JSON schema outputs.


Data & Tools:

  • SQL fundamentals; exposure to data wrangling and pipelines.
  • Git/GitHub, Jupyter, basic Docker.


WHAT ARE WE LOOKING FOR?

  • Solid academic foundation with strong applied ML/DL exposure.
  • Curiosity to learn cutting-edge AI and willingness to experiment.
  • Clear communicator who can explain ML/LLM trade-offs simply.
  • Strong problem-solving and ownership mindset.


MINIMUM QUALIFICATIONS:

  • Doctorate (Academic) Degree and 2 years of related work experience; or Master's Level Degree and 5 years of related work experience; or Bachelor's Level Degree and 7 years of related work experience in building AI systems/solutions with Machine Learning, Deep Learning, and LLMs.


MUST-HAVES:

  • Education/qualification: Preferably from a premier institute such as IIT, IISc, IIIT, NIT, or BITS, or a recognized regional tier-1 college.


  • Doctorate (Academic) Degree and 2 years related work experience; or Master's Level Degree and related work experience of 5 years; or Bachelor's Level Degree and related work experience of 7 years


  • Minimum 5 years' experience in the mandatory skills: Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing


  • 3.5 to 4 years' proficiency with PyTorch or TensorFlow/Keras


  • Candidates from engineering product companies have higher chances of getting shortlisted (current company or past experience)


QUESTIONNAIRE: 

Do you have at least 5 years of experience with Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing? Please mention the skills and years of experience:


Do you have experience with PyTorch or TensorFlow / Keras?

  • PyTorch
  • TensorFlow / Keras
  • Both


How many years of experience do you have with PyTorch or TensorFlow / Keras?

  • Less than 3 years
  • 3 to 3.5 years
  • 3.5 to 4 years
  • More than 4 years


Is the candidate willing to relocate to Chennai?

  • Ready to relocate
  • Based in Chennai


What type of company have you worked for in your career?

  • Service-based IT company
  • Product company
  • Semiconductor company
  • Hardware manufacturing company
  • None of the above
 Global Leader in Diversified Electronics


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai
7 - 16 yrs
₹30L - ₹65L / yr
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Algorithms
skill iconPython
skill iconC++
+10 more

JOB DESCRIPTION/PREFERRED QUALIFICATIONS:

KEY RESPONSIBILITIES: 

  • Lead and mentor a team of algorithm engineers, providing guidance and support to ensure their professional growth and success. 
  • Develop and maintain the infrastructure required for the deployment and execution of algorithms at scale. 
  • Collaborate with data scientists, software engineers, and product managers to design and implement robust and scalable algorithmic solutions. 
  • Optimize algorithm performance and resource utilization to meet business objectives.
  • Stay up to date with the latest advancements in algorithm engineering and infrastructure technologies and apply them to improve our systems.
  • Drive continuous improvement in development processes, tools, and methodologies. 


QUALIFICATIONS: 

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 
  • Proven experience in developing computer vision and image processing algorithms and ML/DL algorithms. 
  • Familiar with high performance computing, parallel programming and distributed systems.
  • Strong leadership and team management skills, with a track record of successfully leading engineering teams. 
  • Proficiency in programming languages such as Python, C++ and CUDA. 
  • Excellent problem-solving and analytical skills. 
  • Strong communication and collaboration abilities. 


PREFERRED QUALIFICATIONS: 

  • Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn). 
  • Experience with GPU architecture and algo development toolkits like Docker, Apptainer. 


MINIMUM QUALIFICATIONS: 

  • Bachelor's degree plus 8+ years of experience
  • Master's degree plus 8+ years of experience
  • Familiar with high performance computing, parallel programming and distributed systems.


MUST-HAVE SKILLS: 

  • PhD with 6 years' industry experience, or M.Tech with 8 years' experience, or B.Tech with 10 years' experience.
  • 14 years' experience if an individual contributor (IC) role.
  • Minimum 1 year's experience working as a Manager/Lead
  • 8 years' experience in any of the programming languages such as Python/C++/CUDA.
  • 8 years' experience in Machine Learning, Artificial Intelligence, and Deep Learning.
  • 2 to 3 years' experience in Image Processing & Computer Vision is a MUST
  • Product / Semi-conductor / Hardware Manufacturing company experience is a MUST. Candidates should be from engineering product companies 
  • Candidates from Tier 1 colleges like (IIT, IIIT, VIT, NIT) (Preferred)
  • Relocation to Chennai is mandatory


NICE TO HAVE SKILLS: 

  • Candidates from Semicon or manufacturing companies
  • Candidates with a CGPA above 8



Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Hyderabad, Noida, Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Kochi (Cochin), Kolkata, Calcutta, Pune, Thiruvananthapuram, Trivandrum
7 - 15 yrs
₹15L - ₹30L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
Data Lake

SENIOR DATA ENGINEER:

ROLE SUMMARY:

Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.



RESPONSIBILITIES:

  • Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
  • Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
  • Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
  • Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
  • DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
  • Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
  • Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
  • Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
  • Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
  • Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
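The security-and-governance responsibility above includes PII detection and masking. A toy regex-based sketch is below (the patterns and placeholder tokens are invented for illustration; a production pipeline would lean on a managed service such as AWS Comprehend rather than hand-rolled regexes):

```python
import re

# Hypothetical PII-masking pass: replace detected spans with typed tokens.
# The patterns below are simplistic stand-ins, not production-grade detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{10}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

print(mask_pii("Reach jane.doe@example.com or 9876543210"))
# → Reach <EMAIL> or <PHONE>
```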



REQUIRED QUALIFICATIONS:

  • Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
  •  Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
  • Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
  • ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
  • Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
  • DevOps/IaC: Extensive hands-on Terraform practice; mature CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
  • Serverless and events: Design event-driven distributed systems on AWS.
  • NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
  • AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
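As a small illustration of the "SQL for analytics, window functions" requirement above, a minimal sketch using an in-memory SQLite database as a stand-in for a real warehouse table (the table and data are invented):

```python
import sqlite3

# Stand-in warehouse table; in practice this would be Snowflake/BigQuery/Delta.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100), ("north", 300), ("south", 200)])

# Rank each sale within its region by amount, descending — a typical
# analytical window-function pattern.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)  # [('north', 300, 1), ('north', 100, 2), ('south', 200, 1)]
```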



NICE-TO-HAVE QUALIFICATIONS:

  • Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
  • Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
  • Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
  • Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.



OUTCOMES AND MEASURES:

  • Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
  • Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
  • Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
  • Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
  • Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.



LOCATION AND SCHEDULE:

●      Location: Outside US (OUS).

●      Schedule: Minimum 6 hours of overlap with US time zones.

Read more
Tata Consultancy Services
Bengaluru (Bangalore), Hyderabad, Pune, Delhi, Kolkata, Chennai
5 - 8 yrs
₹7L - ₹30L / yr
skill iconScala
skill iconPython
PySpark
Apache Hive
Spark
+3 more

Skills and competencies:

Required:

  • Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
  • Working experience in PySpark and Scala to develop code that validates and implements models in Credit Risk/Banking.
  • Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
  • Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
  • Experience in systems integration, web services, and batch processing.
  • Experience migrating code to PySpark/Scala is a big plus.
  • Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
  • Flexibility in approach and thought process.
  • Willingness to learn and keep up with periodic changes in regulatory requirements per the FED.
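As a toy illustration of the statistical-analysis skill listed above, a stdlib-only sketch that computes portfolio-level summary statistics and flags unusual months (all figures invented; real work at this scale would run in PySpark/Scala):

```python
import statistics

# Illustrative monthly default rates (made-up numbers).
monthly_default_rates = [0.021, 0.019, 0.024, 0.022, 0.035, 0.020]

mean = statistics.mean(monthly_default_rates)
stdev = statistics.stdev(monthly_default_rates)

# Flag months more than one standard deviation above the mean.
outliers = [r for r in monthly_default_rates if r > mean + stdev]
print(round(mean, 4), outliers)  # 0.0235 [0.035]
```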

Read more
Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Mumbai, Chennai
1 - 3 yrs
₹5L - ₹8L / yr
skill iconPython
SQL
Data Structures
ETL
Dashboard
+3 more

About Us:

PluginLive is an all-in-one tech platform that bridges the gap between its stakeholders: Corporates, Institutes, Students, and Assessment & Training Partners. The ecosystem helps Corporates build and position their brand with colleges and the student community to scale their human capital, while increasing student placements for Institutes and giving Students a real-time perspective of the corporate world to help them upskill into more desirable candidates.


Role Overview:

Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.


Responsibilities:

  • Assist in building and maintaining ETL/ELT pipelines for data processing
  • Write SQL queries to extract and analyze data from various sources
  • Support data quality checks and basic data validation processes
  • Create simple dashboards and reports using visualization tools
  • Learn and work with Oracle Cloud services under guidance
  • Use Python for basic data manipulation and cleaning tasks
  • Document data processes and maintain data dictionaries
  • Collaborate with team members to understand data requirements
  • Participate in troubleshooting data issues with senior support
  • Contribute to data migration tasks as needed
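To give a flavor of the pipeline and data-quality work described above, a minimal stdlib sketch of a basic validation step on a CSV extract (the column names and data are invented):

```python
import csv
import io

# Stand-in for a CSV extract from an upstream source.
raw = io.StringIO("order_id,amount\n1,250\n2,\n3,125\n")

clean = []
for row in csv.DictReader(raw):
    if not row["amount"]:          # basic data-quality check: skip incomplete rows
        continue
    clean.append({"order_id": int(row["order_id"]),
                  "amount": int(row["amount"])})
print(clean)
# → [{'order_id': 1, 'amount': 250}, {'order_id': 3, 'amount': 125}]
```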


Qualifications:

Required:

  • Bachelor's degree in Computer Science, Information Systems, or related field
  • Around 2 years of experience in data engineering or a related field
  • Strong SQL knowledge and database concepts
  • Comfortable with Python programming
  • Understanding of data structures and ETL concepts
  • Problem-solving mindset and attention to detail
  • Good communication skills
  • Willingness to learn cloud technologies


Preferred:

  • Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
  • Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
  • Experience with Pandas for data manipulation
  • Understanding of data warehousing concepts
  • Familiarity with version control (Git)
  • Academic projects or internships involving data processing


Nice-to-Have:

  • Knowledge of dbt, BigQuery, or Snowflake
  • Exposure to big data concepts
  • Experience with Jupyter notebooks
  • Comfort with AI-assisted coding tools (Copilot, GPTs)
  • Personal projects showcasing data work


What We Offer:

  • Mentorship from senior data engineers
  • Hands-on learning with modern data stack
  • Access to paid AI tools and learning resources
  • Clear growth path to mid-level engineer
  • Direct impact on product and data strategy
  • No unnecessary meetings — focused execution
  • Strong engineering culture with continuous learning opportunities
Read more
IT MNC

IT MNC

Agency job
via FIRST CAREER CENTRE by Aisha Fcc
Bengaluru (Bangalore), Noida, Hyderabad, Pune, Chennai
4 - 8 yrs
₹15L - ₹30L / yr
skill iconPython
skill iconJavascript
frappe

Development and Customization:


Build and customize Frappe modules to meet business requirements.


Develop new functionalities and troubleshoot issues in ERPNext applications.


Integrate third-party APIs for seamless interoperability.


Technical Support:


Provide technical support to end-users and resolve system issues.


Maintain technical documentation for implementations.


Collaboration:


Work with teams to gather requirements and recommend solutions.


Participate in code reviews for quality standards.


Continuous Improvement:


Stay updated with Frappe developments and optimize application performance.


Skills Required:

Proficiency in Python, JavaScript, and relational databases.


Knowledge of Frappe/ERPNext framework and object-oriented programming.


Experience with Git for version control.


Strong analytical skills

Read more
Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Chennai, Mumbai
4 - 6 yrs
₹10L - ₹20L / yr
skill iconPython
SQL
NOSQL Databases
Data architecture
Data modeling
+7 more

Role Overview:

We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), data warehousing principles, and proficiency in data manipulation and visualization tools, including Python and SQL.


Responsibilities:

  • Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
  • Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
  • Define data governance policies and procedures to ensure data quality, security, and compliance.
  • Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
  • Develop and execute data migration strategies to Oracle Cloud.
  • Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
  • Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
  • Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders.
  • Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
  • Ensure the performance and reliability of data visualization dashboards and reports.
  • Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
  • Troubleshoot data-related issues and provide timely resolutions.
  • Document data architectures, data flows, and data visualization solutions.
  • Participate in the evaluation and selection of new data technologies and tools.
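To illustrate the dimensional-modeling work this role involves, a minimal star-schema sketch on an in-memory SQLite database (the dimension/fact tables and data are invented; a real design would live in Oracle Cloud):

```python
import sqlite3

# Tiny star schema: one dimension table, one fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount INTEGER);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (1, 100), (1, 50), (2, 70);
""")

# Classic star-schema query: join facts to the dimension and aggregate.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS total
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 150), ('games', 70)]
```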


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
  • Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role. 

  • Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
  • Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
  • Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
  • Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
  • Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
  • Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights. 
  • Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences. 
  • Experience with data governance and data quality principles.
  • Familiarity with agile development methodologies.
  • Ability to work independently and collaboratively within a team environment.

Application Link- https://forms.gle/km7n2WipJhC2Lj2r5

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Mumbai, Pune, Noida
4 - 6 yrs
₹3L - ₹21L / yr
AWS Data Engineer
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark
databricks
+1 more

 Key Responsibilities

  • Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
  • Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
  • Perform data wrangling, cleansing, and transformation using Python and SQL
  • Collaborate with data scientists to integrate Generative AI models into analytics workflows
  • Build dashboards and reports to visualize insights using tools like Power BI or Tableau
  • Ensure data quality, governance, and security across all data assets
  • Optimize performance of data pipelines and troubleshoot bottlenecks
  • Work closely with stakeholders to understand data requirements and deliver actionable insights

🧪 Required Skills

  • Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
  • Big Data: Databricks, Apache Spark, PySpark
  • Programming: Python, SQL
  • Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
  • Analytics: Data Modeling, Visualization, BI Reporting
  • Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
  • DevOps (Bonus): Git, Jenkins, Terraform, Docker

📚 Qualifications

  • Bachelor's or Master’s degree in Computer Science, Data Science, or related field
  • 3+ years of experience in data engineering or data analytics
  • Hands-on experience with Databricks, PySpark, and AWS
  • Familiarity with Generative AI tools and frameworks is a strong plus
  • Strong problem-solving and communication skills

🌟 Preferred Traits

  • Analytical mindset with attention to detail
  • Passion for data and emerging technologies
  • Ability to work independently and in cross-functional teams
  • Eagerness to learn and adapt in a fast-paced environment


Read more
DATAMARK, Inc. BPO Services Pvt Ltd

at DATAMARK, Inc. BPO Services Pvt Ltd

2 candid answers
1 video
Balakumar G
Posted by Balakumar G
Chennai
9 - 16 yrs
₹16L - ₹22L / yr
skill iconC#
SQL
DevOps
skill iconReact.js
ASP.NET MVC
+15 more

Technical Lead

The ideal candidate should possess the following qualifications:

  • Education: Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • Experience: 9+ years in software development with a proven track record of delivering scalable applications.
  • Leadership Skills: 4+ years of experience in a technical leadership role, demonstrating strong mentoring abilities.
  • Lead and mentor a team of software developers and validation engineers.
  • Technical Skills: Proficiency in languages and technologies such as C#, .NET, React.js, JavaScript, SQL/MySQL, Web API, or Python, along with the frameworks and tools used in software development.
  • Working knowledge of Selenium to support current business automation tools and future automation requirements.
  • Working knowledge of PHP is desired to support current legacy applications that are on the roadmap for future modernization.
  • Strong understanding of the software development lifecycle (SDLC).
  • Experience with agile methodologies (Scrum/Kanban or similar).
  • Knowledge of version control systems (Git or similar).
  • Development Methodologies: Experience with Agile development methodologies and experience with CI/CD pipelines.
  • Problem-Solving Skills: Strong analytical and problem-solving abilities that enable the identification of complex technical issues.
  • Collaboration: Excellent communication and collaboration skills, with the ability to work effectively within a team environment.
  • Innovation: A passion for technology and innovation, with a keen interest in exploring new technologies to find the best solutions.


Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Chennai
5 - 8 yrs
₹20L - ₹30L / yr
skill iconPython
skill iconJava
Basic Qualifications : ● Experience: 4+ years. ●...
Immediate joiner

Basic Qualifications :

● Experience: 4+ years.

● Hands-on development experience with a broad mix of languages such as JAVA, Python, JavaScript, etc.

● Server-side development experience mainly in JAVA, (Python and NodeJS can be considerable)

● UI development experience in ReactJS or AngularJS or PolymerJS or EmberJS, or jQuery, etc., is good to have.

● Passion for software engineering and following the best coding concepts.

● Good to great problem solving and communication skills.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Chennai
4 - 8 yrs
₹12L - ₹20L / yr
skill iconPython
skill iconJava
Basic Qualifications : ● Experience: 4+ years. ●...
Immediate joiner

Basic Qualifications :

● Experience: 4+ years.

● Hands-on development experience with a broad mix of languages such as JAVA, Python, JavaScript, etc.

● Server-side development experience mainly in JAVA, (Python and NodeJS can be considerable)

● UI development experience in ReactJS or AngularJS or PolymerJS or EmberJS, or jQuery, etc., is good to have.

● Passion for software engineering and following the best coding concepts.

● Good to great problem solving and communication skills.

 

Nice to have Qualifications :

● Product and customer-centric mindset.

● Great OO skills, including design patterns.

● Experience with devops, continuous integration & deployment.

● Exposure to big data technologies, Machine Learning and NLP will be a plus.

Read more
Umanist India
Chennai
7 - 8 yrs
₹21L - ₹22L / yr
Google Cloud Platform (GCP)
skill iconMachine Learning (ML)
skill iconPython

Job Title: Software Engineer Consultant/Expert 34192 

Location: Chennai

Work Type: Onsite

Notice Period: Immediate joiners only, or candidates serving notice of up to 30 days.

 

Position Description:

  • Candidate with strong Python experience.
  • Full-stack development on GCP with end-to-end deployment/MLOps; hands-on software engineer across front end, back end, and MLOps.
  • This is a Tech Anchor role.

Experience Required:

  • 7+ years
Read more
Umanist India
Prince Tiwari
Posted by Prince Tiwari
Chennai
5 - 6 yrs
₹20L - ₹21L / yr
skill iconAngularJS (1.x)
skill iconReact.js
skill iconPython
skill iconJava
skill iconSpring Boot

Key Responsibilities: 34249 

  • Feature Development: Design, develop, and maintain new features and enhancements across the stack.
  • Front-End: Build intuitive, responsive UIs using Angular or React.
  • Back-End: Develop scalable APIs and services using Python (preferred), Java/Spring, or Node.js.
  • Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP) — familiarity with services like App Engine, Cloud Functions, Kubernetes is expected.
  • Performance Tuning: Identify and optimize performance bottlenecks.
  • Code Quality: Participate in code reviews and maintain high standards through unit testing and automation.
  • DevOps & CI/CD: Collaborate on deployment pipelines using Tekton, Terraform, and other DevOps tools.
  • Cross-Functional Collaboration: Work closely with Product Managers, UI/UX Designers, and fellow Engineers in an agile environment.

Must-Have Skills:

  • Strong development expertise in Python (preferred), Angular, and GCP
  • Understanding of DevOps practices
  • Experience with SDLC, agile methodologies, and unit testing

Good to Have (Nice-to-Haves):

  • Hands-on experience with:
  • Tekton, Terraform, CI/CD pipelines
  • Large Language Models (LLMs) integration
  • AWS/Azure (in addition to GCP)
  • Contributions to open-source projects
  • Familiarity with API design and microservices architecture

Educational Qualification:

  • Required: Bachelor’s Degree in Computer Science, Engineering, or related discipline




Read more
Deqode

at Deqode

1 recruiter
Sneha Jain
Posted by Sneha Jain
Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Indore, Jaipur, Kolkata, Chennai, Bengaluru (Bangalore)
3.5 - 7 yrs
₹8L - ₹13L / yr
AWS Lambda
skill iconPython
Microservices
Amazon EC2

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.


Key Responsibilities:

  • Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
  • Design and deploy serverless applications using AWS Lambda and API Gateway.
  • Build and manage RESTful APIs and microservices.
  • Implement CI/CD pipelines for efficient and secure deployments.
  • Work with Docker to containerize applications and manage container lifecycles.
  • Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
  • Design efficient database schemas and write optimized SQL queries for PostgreSQL.
  • Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
  • Write unit, integration, and performance tests to ensure code reliability and robustness.
  • Monitor, troubleshoot, and optimize application performance in production environments.
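A minimal sketch of the Lambda-plus-API-Gateway pattern described above, using the standard proxy-integration event shape (the handler logic and parameter names are invented for illustration):

```python
import json

def handler(event, context):
    """AWS Lambda handler behind an API Gateway proxy integration."""
    # queryStringParameters is None when no query string is present.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event — no AWS account needed for this sketch.
resp = handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"], resp["body"])
```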


Required Skills:

  • Strong proficiency in Python and Python-based web frameworks.
  • Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
  • Sound knowledge of microservices architecture and asynchronous programming.
  • Proficiency with PostgreSQL, including schema design and query optimization.
  • Hands-on experience with Docker and containerized deployments.
  • Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
  • Familiarity with API documentation tools (Swagger/OpenAPI).
  • Version control with Git.


Read more
Us healthcare company

Us healthcare company

Agency job
via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
4 - 8 yrs
₹20L - ₹30L / yr
ai/ml
TensorFlow
skill iconPython
Google Cloud Platform (GCP)
Vertex

  • Design, develop, and implement AI/ML models and algorithms.
  • Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
  • Write clean, efficient, and well-documented code.
  • Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
  • Work closely with senior team members to understand project requirements and contribute to technical solutions.
  • Troubleshoot and debug AI/ML models and applications.
  • Stay up to date with the latest advancements in AI/ML.
  • Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.
  • Develop and deploy AI solutions on Google Cloud Platform (GCP).
  • Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
  • Utilize Vertex AI for model training, deployment, and management.
  • Integrate and leverage Google Gemini for specific AI functionalities.
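One of the preprocessing steps mentioned above, min-max scaling, can be sketched in plain Python (in practice this would use Pandas/NumPy or scikit-learn; the function here is a hand-rolled illustration):

```python
def min_max_scale(values):
    """Rescale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # avoid division by zero on constant input
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_scale([10, 20, 40]))  # [0.0, 0.3333..., 1.0]
```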

Qualifications:

  • Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
  • 3+ years of experience in developing and implementing AI/ML models.
  • Strong programming skills in Python.
  • Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
  • Good understanding of machine learning concepts and techniques.
  • Ability to work independently and as part of a team.
  • Strong problem-solving skills.
  • Good communication skills.
  • Experience with Google Cloud Platform (GCP) is preferred.
  • Familiarity with Vertex AI is a plus.


Read more
Klenty

at Klenty

2 recruiters
Klenty Ramya
Posted by Klenty Ramya
Chennai
3 - 5 yrs
₹10L - ₹16L / yr
skill iconMongoDB
skill iconExpress
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconAmazon Web Services (AWS)
+5 more
  • Work with a team to provide end-to-end solutions, including coding, unit testing, and defect fixes.
  • Build scalable solutions and work with quality assurance and control teams to analyze and fix issues
  • Develop and maintain APIs and Services in Node.js/Python 
  • Develop and maintain web-based UI’s using front-end frameworks 
  • Participate in code reviews, unit testing and integration testing 
  • Participate in the full software development lifecycle, from concept and design to implementation and support 
  • Ensure application performance, scalability, and security through best practices in coding, testing and deployment 
  • Collaborate with DevOps team for troubleshooting deployment issues 

 

Qualification 

● 1-5 years of experience as a Software Engineer or similar, focusing on software development and system integration 

● Proficiency in Node.js, Typescript, React, Express framework 

● In-depth knowledge of databases such as MongoDB 

● Proficient in HTML5, CSS3, and responsive UI design 

● Proficiency in any Python development framework is a plus 

● Strong direct experience in functional and object oriented programming using Javascript 

● Experience with cloud platforms (Azure preferred) 

● Microservices architecture and containerization 

● Expertise in performance monitoring, tuning, and optimization 

● Understanding of DevOps practices for automated deployments 

● Understanding of software design patterns and best practices 

● Practical experience working in Agile developments (scrum) 

● Excellent critical thinking skills and the ability to mentor junior team members 

● Effectively communicate and collaborate with cross-functional teams 

● Strong capability to work independently and deliver results within tight deadlines 

● Strong problem-solving abilities and attention to detail

Read more
Chennai based

Chennai based

Agency job
via Girmiti Software by Deric John
Chennai
5 - 6 yrs
₹7L - ₹14L / yr
skill iconGo Programming (Golang)
skill iconPython
skill iconJava

Proficient in Golang, Python, Java, C++, or Ruby (at least one)

Strong grasp of system design, data structures, and algorithms

Experience with RESTful APIs, relational and NoSQL databases

Proven ability to mentor developers and drive quality delivery

Track record of building high-performance, scalable systems

Excellent communication and problem-solving skills

Experience in consulting or contractor roles is a plus

Read more
HappyFox

at HappyFox

1 video
6 products
Sharon Samuel
Posted by Sharon Samuel
Chennai
2 - 5 yrs
₹9L - ₹15L / yr
Test Automation (QA)
Manual testing
skill iconPython
skill iconJavascript
skill iconJava

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.


Responsibilities:

Ensure the quality of product feature development.

Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.

Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python.

Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.

Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.

Collaboration: Participate in project planning and execution with the team for efficient project delivery.
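The "transparent test metrics" idea above can be sketched as a small result summarizer (the record fields and report shape are invented for illustration):

```python
def summarize(results):
    """Roll raw test-result records up into a pass-rate report."""
    total = len(results)
    passed = sum(1 for r in results if r["status"] == "pass")
    return {"total": total, "passed": passed,
            "pass_rate": round(100 * passed / total, 1)}

runs = [{"test": "login", "status": "pass"},
        {"test": "signup", "status": "fail"},
        {"test": "search", "status": "pass"}]
print(summarize(runs))  # {'total': 3, 'passed': 2, 'pass_rate': 66.7}
```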


Requirements:

A Bachelor's degree in computer science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.

2-5 years of solid experience in software testing, with a focus on automation. Proficiency in using a defect tracking system, Code repositories & IDEs.

A good grasp of programming languages like Python, Java, or JavaScript. Must be able to understand and write code.

Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).

Good team player with a proactive approach to continuous learning.

Sound understanding of the Agile software development methodology.

Experience in a SaaS-based product company or a fast-paced startup environment is a plus.

Read more
Get to hear about interesting companies hiring right now
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs