
50+ Python Jobs in Chennai | Python Job openings in Chennai

Apply to 50+ Python Jobs in Chennai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Amwhiz
Posted by Aruljothi Kuppusamy
Chennai
2 - 4 yrs
₹4.5L - ₹12L / yr
Artificial Intelligence (AI)
Large Language Models (LLM)
Agentic AI
Python
Databases

Company Overview:

We are a small, dynamic, and growing AI-focused company delivering cutting-edge AI solutions to our clients. We specialize in implementing real-world applications using Large Language Models (LLMs), Small Language Models (SLMs), Retrieval-Augmented Generation (RAG), Agentic AI systems, and cloud-native services across AWS, Azure, and GCP.


Job Summary:

We are looking for a passionate AI Developer & Solution Provider who thrives on solving real client problems using AI. The ideal candidate will have solid experience with Python, language models, chatbot development, and building scalable solutions using cloud services and modern databases. You will work directly with clients to understand their needs and deliver end-to-end AI-powered solutions.


Key Responsibilities:

  • Design, build, and deploy AI/ML solutions using LLMs (e.g., GPT-4, Claude), SLMs, and open-source models.
  • Develop and fine-tune chatbots, intelligent agents, and Agentic AI workflows for various client use-cases.
  • Implement Retrieval-Augmented Generation (RAG) pipelines to enhance LLM capabilities.
  • Build secure, scalable backends using Python and integrate AI components with APIs, databases, and cloud systems.
  • Understand client requirements and translate them into technically sound and scalable AI solutions.
  • Deploy applications and services using cloud platforms (AWS, Azure, GCP).
  • Work with structured and unstructured data, including setting up and managing databases (SQL, NoSQL).
  • Stay updated with the latest trends in AI/ML and help bring innovative solutions to the team and clients.
  • Document solutions and provide technical support during client handover and implementation.
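
The RAG responsibility listed above can be illustrated with a minimal sketch. The keyword-overlap retriever and the sample documents are purely illustrative stand-ins for a vector-database similarity search (FAISS, Pinecone, Weaviate, Chroma), and the final LLM call is omitted:

```python
# Minimal RAG sketch: retrieve the most relevant documents, then build an
# augmented prompt for the LLM. Keyword overlap stands in for real vector
# similarity search; the sample documents are hypothetical.
import string

def tokenize(text: str) -> set[str]:
    """Lowercase, strip punctuation, and split into a set of words."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Assemble the context-augmented prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed within 3 business days.",
    "Support is available Monday to Friday.",
    "Refunds require the original invoice number.",
]
prompt = build_prompt("Are invoices processed within business days?", docs)
```

In a production pipeline the `retrieve` step would embed the query and documents and query a vector store, but the augment-then-generate structure stays the same.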

Required Skills and Experience:

  • Strong Python programming experience, especially for AI/ML development.
  • Hands-on experience with LLMs, SLMs, OpenAI, Hugging Face, etc.
  • Practical knowledge of RAG and vector databases (e.g., FAISS, Pinecone, Weaviate, Chroma).
  • Experience with Agentic AI systems, frameworks (e.g., LangChain, AutoGen, CrewAI), and chatbot development.
  • Good understanding of databases – both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis).
  • Solid experience with cloud platforms: AWS (SageMaker, Lambda), Azure (OpenAI, ML Studio), GCP (Vertex AI, Functions).
  • Comfortable interacting with clients and delivering tailored AI solutions.
  • Excellent problem-solving and communication skills.

Nice to Have:

  • Experience with fine-tuning/custom training of models.
  • Experience in deploying scalable APIs (e.g., FastAPI, Flask).
  • Background in data engineering or MLOps.
  • Familiarity with DevOps, Docker, Kubernetes is a plus.

Why Join Us?

  • Opportunity to work on cutting-edge AI solutions with real-world impact.
  • Collaborative and flexible work culture.
  • Direct exposure to diverse clients and industries.
  • Fast-paced environment with learning and growth opportunities.


Talent Pro
Posted by Mayank choudhary
Chennai
5 - 8 yrs
₹20L - ₹30L / yr
Python
Java
Immediate joiner

Basic Qualifications:

● Experience: 4+ years.

● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript, etc.

● Server-side development experience, mainly in Java (Python and Node.js may also be considered).

● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS, or jQuery is good to have.

● Passion for software engineering and for following best coding practices.

● Strong problem-solving and communication skills.

Talent Pro
Posted by Mayank choudhary
Chennai
4 - 8 yrs
₹12L - ₹20L / yr
Python
Java
Immediate joiner

Basic Qualifications:

● Experience: 4+ years.

● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript, etc.

● Server-side development experience, mainly in Java (Python and Node.js may also be considered).

● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS, or jQuery is good to have.

● Passion for software engineering and for following best coding practices.

● Strong problem-solving and communication skills.

 

Nice to have Qualifications :

● Product and customer-centric mindset.

● Great OO skills, including design patterns.

● Experience with DevOps, continuous integration & deployment.

● Exposure to big data technologies, Machine Learning and NLP will be a plus.

Moative

Posted by Eman Khan
Chennai
5 - 8 yrs
₹15L - ₹30L / yr
React.js
Angular (2+)
Node.js
Python
Large Language Models (LLM)

About Moative

Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.


Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes Ph.D.s, academicians, IBM Research Fellows, and former founders.


Work you’ll do

As a Full Stack AI Developer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of users by transforming how services are delivered and experienced. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.


Responsibilities

  • Design and develop AI powered applications, including intuitive user interfaces at the front-end and robust back-end of web applications.
  • Develop full-stack features in cloud-native environments with proficiency in HTML, CSS, JavaScript (popular frameworks such as React or Angular) as well as server-side languages such as Node.js and Python. 
  • Utilize and adapt foundation models and multi-modal LLMs (voice, image, text) as the core building blocks for developing impactful products aimed at improving service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance.
  • Architect, build, and deploy intelligent agentic AI workflows that automate and optimize key processes. You will be involved in the full lifecycle from conceptualization and design to implementation and monitoring.
  • Contribute directly to enhancing our model evaluation, monitoring and fine-tuning methodologies to ensure robust and reliable system performance. 
  • Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions.


Who you are

You are a passionate and results-oriented engineer who is driven by the potential of AI to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration. 


You have experience in developing and deploying real-world applications, including ensuring integration between user interfaces and server-side logic. You are excellent at writing clean, efficient code for both front end and back end, and have a strong ability to conduct regular testing and debugging to ensure optimal performance and a smooth user experience. You are excited at the possibility of embedding AI technologies to develop intelligent applications that can directly impact business services and make a positive difference to users.


Skills & Requirements

  • 3+ years of experience in full stack development
  • Proficiency in HTML, CSS, JavaScript (popular frameworks such as React or Angular).
  • Strong knowledge of server-side languages such as Node.js and Python.
  • Experience with databases like MySQL, PostgreSQL, or MongoDB.
  • Knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization, DevOps (Docker, Kubernetes)
  • Familiarity with version control systems, especially Git.
  • 1+ year of experience with AI models and tools, including deploying multi-modal LLMs (text, image, voice) as part of business applications.
  • Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
  • Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:

  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work are of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers. 


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Umanist India
Chennai
7 - 8 yrs
₹21L - ₹22L / yr
Google Cloud Platform (GCP)
Machine Learning (ML)
Python

Job Title: Software Engineer Consultant/Expert 34192 

Location: Chennai

Work Type: Onsite

Notice Period: Immediate joiners only, or candidates serving notice of up to 30 days.

 

Position Description:

  • Candidate with strong Python experience.
  • Full-stack development in GCP with end-to-end deployment and MLOps; hands-on experience across front end, back end, and MLOps.
  • This is a Tech Anchor role.

Experience Required:

  • 7+ years
Umanist India
Posted by Prince Tiwari
Chennai
5 - 6 yrs
₹20L - ₹21L / yr
AngularJS (1.x)
React.js
Python
Java
Spring Boot

Key Responsibilities: 34249 

  • Feature Development: Design, develop, and maintain new features and enhancements across the stack.
  • Front-End: Build intuitive, responsive UIs using Angular or React.
  • Back-End: Develop scalable APIs and services using Python (preferred), Java/Spring, or Node.js.
  • Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP) — familiarity with services like App Engine, Cloud Functions, Kubernetes is expected.
  • Performance Tuning: Identify and optimize performance bottlenecks.
  • Code Quality: Participate in code reviews and maintain high standards through unit testing and automation.
  • DevOps & CI/CD: Collaborate on deployment pipelines using Tekton, Terraform, and other DevOps tools.
  • Cross-Functional Collaboration: Work closely with Product Managers, UI/UX Designers, and fellow Engineers in an agile environment.

Must-Have Skills:

  • Strong development expertise in Python (preferred), Angular, and GCP
  • Understanding of DevOps practices
  • Experience with SDLC, agile methodologies, and unit testing

Good to Have (Nice-to-Haves):

  • Hands-on experience with Tekton, Terraform, and CI/CD pipelines
  • Experience integrating Large Language Models (LLMs)
  • AWS/Azure experience (in addition to GCP)
  • Contributions to open-source projects
  • Familiarity with API design and microservices architecture

Educational Qualification:

  • Required: Bachelor’s Degree in Computer Science, Engineering, or related discipline




Appiness Interactive Pvt. Ltd.
Posted by S Suriya Kumar
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Indore, Gurugram, Delhi, Ahmedabad, Jaipur
5 - 8 yrs
₹5L - ₹25L / yr
Python
R Programming
Java
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Company Description

Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the un-trodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.


Key Responsibilities:

  • Design and implement advanced AI/ML models and algorithms to address real-world challenges.
  • Analyze large and complex datasets to derive actionable insights and train predictive models.
  • Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
  • Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
  • Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
  • Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
  • Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
  • Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed.


Required Skills:

  • Strong programming skills in Python (preferred); experience with R or Java is also valuable.
  • Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
  • Hands-on experience with cloud platforms like AWS, Azure, or GCP.
  • Solid foundation in data structures, algorithms, statistics, and machine learning principles.
  • Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
  • Proven experience in deploying and maintaining AI/ML models in production environments.
  • Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.
Deqode

Posted by Sneha Jain
Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Indore, Jaipur, Kolkata, Chennai, Bengaluru (Bangalore)
3.5 - 7 yrs
₹8L - ₹13L / yr
AWS Lambda
Python
Microservices
Amazon EC2

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.


Key Responsibilities:

  • Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
  • Design and deploy serverless applications using AWS Lambda and API Gateway.
  • Build and manage RESTful APIs and microservices.
  • Implement CI/CD pipelines for efficient and secure deployments.
  • Work with Docker to containerize applications and manage container lifecycles.
  • Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
  • Design efficient database schemas and write optimized SQL queries for PostgreSQL.
  • Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
  • Write unit, integration, and performance tests to ensure code reliability and robustness.
  • Monitor, troubleshoot, and optimize application performance in production environments.
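
The serverless pattern in the responsibilities above (AWS Lambda behind API Gateway) can be sketched as follows. The handler and response shape follow the API Gateway proxy-integration convention; the greeting endpoint itself is a hypothetical example, not part of this role:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    `event` carries the HTTP request (query string, body, headers); the
    returned dict becomes the HTTP response. The endpoint is illustrative.
    """
    # queryStringParameters is None when the request has no query string
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is a plain function taking a dict, it can be unit-tested locally without any AWS infrastructure, which keeps the CI/CD pipeline mentioned above fast.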


Required Skills:

  • Strong proficiency in Python and Python-based web frameworks.
  • Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
  • Sound knowledge of microservices architecture and asynchronous programming.
  • Proficiency with PostgreSQL, including schema design and query optimization.
  • Hands-on experience with Docker and containerized deployments.
  • Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
  • Familiarity with API documentation tools (Swagger/OpenAPI).
  • Version control with Git.


Greatify
Posted by Ciline Sanjanyaa
Chennai
2 - 5 yrs
₹4L - ₹10L / yr
Playwright
Python

ABOUT THE JOB: 

Job Title: QA Automation Specialist 

Location: Teynampet, Chennai 

Job Type: Full-time 

Company: Gigadesk Technologies Pvt. Ltd. [Greatify.ai] 


COMPANY DESCRIPTION:

At Greatify.ai, we are transforming educational institutions with cutting-edge AI-powered solutions. Our platform acts as a smart operating system for colleges, schools, and universities—enhancing learning, streamlining operations, and maximizing efficiency.

With 100+ institutions served, 100,000+ students impacted globally, and 1,000+ educators empowered, we are redefining the future of education. 


COMPANY WEBSITE: https://www.greatify.ai/


JOB DESCRIPTION:

As a QA Automation Specialist at Greatify, you will be responsible for designing, building, and maintaining robust automated test frameworks and suites covering UI, API, integration, regression, and performance tests for our ed‑tech platforms. As part of an Agile, cross‑functional team, you’ll integrate automation into our CI/CD pipelines to speed up release cycles while ensuring high product quality and reliability. Your role ensures consistent quality, provides actionable insights, and champions automation best practices across the QA function. 


KEY RESPONSIBILITIES:


1. Quality Assurance Strategy:

  • Develop and own QA strategy for EdTech product suites.
  • Work with Product and Engineering teams to define quality benchmarks and release criteria.
  • Ensure quality is embedded early in the software development lifecycle.

2. Test Planning & Execution:

  • Design, write, and execute test cases and scenarios—manual and automated.
  • Manage regression, integration, and exploratory testing.
  • Monitor test outcomes, identify risks, and mitigate issues.

3. Automation Framework Development:

  • Develop scalable, maintainable automation frameworks using Playwright and Selenium, structured with Cucumber (BDD) for readable test specifications.
  • Write automation scripts in Python and Java, following best practices like modular design and the Page Object Model.
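
The Page Object Model mentioned above can be sketched briefly. A real suite would pass in a Playwright `Page`; here a stub records the calls so the pattern itself, one class owning a screen's selectors and actions, runs without a browser. The selectors are hypothetical:

```python
class LoginPage:
    """Page object: one class encapsulates a screen's selectors and actions."""
    USERNAME = "#username"            # hypothetical selectors
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, page):
        self.page = page              # a Playwright Page in real tests; a stub here

    def login(self, user: str, password: str) -> None:
        self.page.fill(self.USERNAME, user)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)

class StubPage:
    """Records (method, args) calls so the page object runs without a browser."""
    def __init__(self):
        self.calls = []
    def fill(self, selector, value):
        self.calls.append(("fill", selector, value))
    def click(self, selector):
        self.calls.append(("click", selector))

stub = StubPage()
LoginPage(stub).login("qa-user", "secret")
```

Tests then call `LoginPage.login(...)` instead of repeating raw selectors, so a UI change touches one class rather than every test.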

4. Bug Tracking and Reporting:

  • Log, triage, and track bugs using tools like Jira.
  • Generate insightful quality reports for stakeholders.

5. Usability and Functional Testing:

  • Evaluate UX across web/mobile platforms.
  • Support UX teams with accessibility and user satisfaction testing.

6. Collaboration and Mentoring:

  • Foster a strong QA culture with best practices and collaboration.


QUALIFICATIONS:

  1. Bachelor’s degree in Computer Science, Information Technology, or related field.
  2. 2+ years of QA experience with at least 2 years in automation testing.
  3. Proficiency in writing automation scripts using mainstream tools.
  4. Experience in education tech systems.
  5. Hands-on knowledge of Agile/Scrum processes.
  6. Familiarity with programming languages Python and Java, using Playwright and Selenium for automation scripting, and employing JMeter or k6 with Grafana for performance testing.
  7. Experience setting up CI/CD pipelines via GitHub Actions and Jenkins, and managing test cases and execution tracking in ClickUp.
  8. Experience with cross-browser and mobile automation is a plus.
  9. Strong problem-solving skills and attention to detail.
  10. Excellent communication and team collaboration skills.


Greatify
Posted by Ciline Sanjanyaa
Chennai
2 - 5 yrs
₹3L - ₹9L / yr
Playwright
Selenium
Python
Java
Automation

ABOUT THE JOB:

Job Title: QA Automation Specialist

Location: Teynampet, Chennai

Job Type: Full-time

Company: Gigadesk Technologies Pvt. Ltd. [Greatify.ai]


COMPANY DESCRIPTION:

At Greatify.ai, we are transforming educational institutions with cutting-edge AI-powered solutions. Our platform acts as a smart operating system for colleges, schools, and universities—enhancing learning, streamlining operations, and maximizing efficiency.

With 100+ institutions served, 100,000+ students impacted globally, and 1,000+ educators empowered, we are redefining the future of education.


COMPANY WEBSITE: https://www.greatify.ai/


JOB DESCRIPTION: 

As a QA Automation Specialist at Greatify, you will be responsible for designing, building, and maintaining robust automated test frameworks and suites covering UI, API, integration, regression, and performance tests for our ed‑tech platforms. As part of an Agile, cross‑functional team, you’ll integrate automation into our CI/CD pipelines to speed up release cycles while ensuring high product quality and reliability. Your role ensures consistent quality, provides actionable insights, and champions automation best practices across the QA function.


KEY RESPONSIBILITIES:

1. Quality Assurance Strategy:

  • Develop and own QA strategy for EdTech product suites.
  • Work with Product and Engineering teams to define quality benchmarks and release criteria.
  • Ensure quality is embedded early in the software development lifecycle.

2. Test Planning & Execution:

  • Design, write, and execute test cases and scenarios—manual and automated.
  • Manage regression, integration, and exploratory testing.
  • Monitor test outcomes, identify risks, and mitigate issues.

3. Automation Framework Development:

  • Develop scalable, maintainable automation frameworks using Playwright and Selenium, structured with Cucumber (BDD) for readable test specifications.
  • Write automation scripts in Python and Java, following best practices like modular design and the Page Object Model.

4. Bug Tracking and Reporting:

  • Log, triage, and track bugs using tools like Jira.
  • Generate insightful quality reports for stakeholders.

5. Usability and Functional Testing:

  • Evaluate UX across web/mobile platforms.
  • Support UX teams with accessibility and user satisfaction testing.

6. Collaboration and Mentoring:

  • Foster a strong QA culture with best practices and collaboration. 


QUALIFICATIONS:

  1. Bachelor’s degree in Computer Science, Information Technology, or related field.
  2. 2+ years of QA experience with at least 2 years in automation testing.
  3. Proficiency in writing automation scripts using mainstream tools.
  4. Experience in education tech systems.
  5. Hands-on knowledge of Agile/Scrum processes.
  6. Familiarity with programming languages Python and Java, using Playwright and Selenium for automation scripting, and employing JMeter or k6 with Grafana for performance testing.
  7. Experience setting up CI/CD pipelines via GitHub Actions and Jenkins, and managing test cases and execution tracking in ClickUp.
  8. Experience with cross-browser and mobile automation is a plus.
  9. Strong problem-solving skills and attention to detail.
  10. Excellent communication and team collaboration skills.

Moative

Posted by Eman Khan
Chennai
3 - 5 yrs
₹10L - ₹25L / yr
Python
NumPy
pandas
scikit-learn
Natural Language Toolkit (NLTK)

About Moative

Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.


Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes Ph.D.s, academicians, IBM Research Fellows, and former founders.


Work you’ll do

As a Data Scientist at Moative, you’ll play a crucial role in extracting valuable insights from data to drive informed decision-making. You’ll work closely with cross-functional teams to build predictive models and develop solutions to complex business problems. You will also be involved in conducting experiments, building POCs and prototypes.


Responsibilities

  • Support end-to-end development and deployment of ML/ AI models - from data preparation, data analysis and feature engineering to model development, validation and deployment
  • Gather, prepare and analyze data, write code to develop and validate models, and continuously monitor and update them as needed.
  • Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
  • Document methodologies and results, present findings and communicate insights to non-technical audiences


Skills & Requirements

  • Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, pandas, scikit-learn, NLTK).
  • Strong understanding of and experience with data analysis, statistical and mathematical concepts, and ML algorithms.
  • Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
  • Broad understanding of data structures and data engineering.
  • Strong communication skills
  • Strong collaboration skills, continuous learning attitude and a problem solving mind-set
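
As a toy illustration of the statistical grounding the list above asks for, here is simple linear regression in plain Python using the closed-form slope cov(x, y)/var(x); in practice NumPy, pandas, or scikit-learn would do this work:

```python
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = a + b*x via the closed-form solution."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope b = covariance(x, y) / variance(x); intercept a = mean(y) - b*mean(x)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Points lying exactly on y = 1 + 2x recover intercept 1 and slope 2.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

The same covariance/variance reasoning underlies the multivariate case, where the normal equations generalize the closed form above.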


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:

  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work are of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers. 


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to be present in the city. We intend to move to a hybrid model in a few months’ time.

Moative

Posted by Eman Khan
Chennai
3 - 5 yrs
₹10L - ₹25L / yr
Python
PySpark
Scala
Data engineering
ETL

About Moative

Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.


Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes Ph.D.s, academicians, IBM Research Fellows, and former founders.


Work you’ll do

As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.


Responsibilities

  • Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/ Azure/ GCP)
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements
  • Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs


Who you are

You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration. 


You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.


Skills & Requirements

  • 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
  • Solid knowledge of cloud infrastructure and data-related services on AWS (EC2, EMR, RDS, Redshift) and/or Azure.
  • Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
  • Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow).
  • Experience with common relational SQL, NoSQL and Graph databases.
  • Strong experience with scripting languages: Python, PySpark, Scala, etc.
  • Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc.
  • Experience with big data tools (Spark, Kafka, etc) and stream processing.
  • Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
  • Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
  • Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
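The pipeline work described above follows the classic extract-transform-load shape. As a minimal, framework-free sketch of that shape (tools like Airflow or Luigi add scheduling and dependency management on top of this pattern; the source format and field names here are hypothetical):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Pull rows from a source; here an in-memory CSV stands in for S3/RDS."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Clean and enrich: drop incomplete rows, normalise strings and types."""
    out = []
    for row in rows:
        if not row.get("amount"):
            continue  # skip incomplete records
        out.append({"city": row["city"].strip().title(),
                    "amount": float(row["amount"])})
    return out

def load(rows: list[dict]) -> dict:
    """Aggregate into the target representation (a warehouse table in practice)."""
    totals: dict = {}
    for row in rows:
        totals[row["city"]] = totals.get(row["city"], 0.0) + row["amount"]
    return totals

RAW = "city,amount\nchennai,10\nchennai,5\npune,\nmumbai,7\n"
result = load(transform(extract(RAW)))
```

The value of a workflow manager is exactly in orchestrating these three steps reliably at scale, with retries and backfills.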


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:

  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about which rituals we commit to, since they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don't need the person with the highest title to say that out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work are of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied at top-notch institutions, won intellectually demanding competitions, built something of your own, or been rated as an outstanding performer by your current or previous employers.


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

US healthcare company

Agency job
via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
4 - 8 yrs
₹20L - ₹30L / yr
AI/ML
TensorFlow
Python
Google Cloud Platform (GCP)
Vertex AI

  • Design, develop, and implement AI/ML models and algorithms.
  • Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
  • Write clean, efficient, and well-documented code.
  • Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
  • Work closely with senior team members to understand project requirements and contribute to technical solutions.
  • Troubleshoot and debug AI/ML models and applications.
  • Stay up-to-date with the latest advancements in AI/ML.
  • Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.
  • Develop and deploy AI solutions on Google Cloud Platform (GCP).
  • Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
  • Utilize Vertex AI for model training, deployment, and management.
  • Integrate and leverage Google Gemini for specific AI functionalities.
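On the preprocessing point above, the most common feature-engineering step is standardization: rescaling a feature column to zero mean and unit variance. A tiny standard-library sketch of the idea that Pandas/NumPy pipelines (or scikit-learn's StandardScaler) implement at scale, with made-up numbers:

```python
from statistics import mean, stdev

def standardize(values: list[float]) -> list[float]:
    """Z-score a feature column: subtract the mean, divide by the std dev."""
    mu = mean(values)
    sigma = stdev(values)  # sample standard deviation
    return [(v - mu) / sigma for v in values]

# Hypothetical raw feature column before model training
ages = [20.0, 30.0, 40.0]
scaled = standardize(ages)
```

Standardized features keep gradient-based models from being dominated by whichever column happens to have the largest raw scale.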

Qualifications:

  • Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
  • 3+ years of experience in developing and implementing AI/ML models.
  • Strong programming skills in Python.
  • Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
  • Good understanding of machine learning concepts and techniques.
  • Ability to work independently and as part of a team.
  • Strong problem-solving skills.
  • Good communication skills.
  • Experience with Google Cloud Platform (GCP) is preferred.
  • Familiarity with Vertex AI is a plus.


Klenty
Posted by Klenty Ramya
Chennai
3 - 5 yrs
₹10L - ₹16L / yr
MongoDB
Express
React.js
Node.js
Amazon Web Services (AWS)
  • Work with a team to provide end-to-end solutions including coding, unit testing and defect fixes.
  • Work to build scalable solutions and work with quality assurance and control teams to analyze and fix issues 
  • Develop and maintain APIs and Services in Node.js/Python 
  • Develop and maintain web-based UI’s using front-end frameworks 
  • Participate in code reviews, unit testing and integration testing 
  • Participate in the full software development lifecycle, from concept and design to implementation and support 
  • Ensure application performance, scalability, and security through best practices in coding, testing and deployment 
  • Collaborate with DevOps team for troubleshooting deployment issues 

 

Qualification 

  • 1-5 years of experience as a Software Engineer or similar, focusing on software development and system integration
  • Proficiency in Node.js, TypeScript, React, Express framework
  • In-depth knowledge of databases such as MongoDB
  • Proficient in HTML5, CSS3, and responsive UI design
  • Proficiency in any Python development framework is a plus
  • Strong direct experience in functional and object-oriented programming using JavaScript
  • Experience with cloud platforms (Azure preferred)
  • Microservices architecture and containerization
  • Expertise in performance monitoring, tuning, and optimization
  • Understanding of DevOps practices for automated deployments
  • Understanding of software design patterns and best practices
  • Practical experience working in Agile development (Scrum)
  • Excellent critical thinking skills and the ability to mentor junior team members
  • Ability to communicate and collaborate effectively with cross-functional teams
  • Strong capability to work independently and deliver results within tight deadlines
  • Strong problem-solving abilities and attention to detail

Chennai-based company

Agency job
via Girmiti Software by Deric John
Chennai
5 - 6 yrs
₹7L - ₹14L / yr
Go (Golang)
Python
Java

Proficient in Golang, Python, Java, C++, or Ruby (at least one)

Strong grasp of system design, data structures, and algorithms

Experience with RESTful APIs, relational and NoSQL databases

Proven ability to mentor developers and drive quality delivery

Track record of building high-performance, scalable systems

Excellent communication and problem-solving skills

Experience in consulting or contractor roles is a plus

HappyFox
Posted by Sharon Samuel
Chennai
2 - 5 yrs
₹9L - ₹15L / yr
Test Automation (QA)
Manual testing
Python
JavaScript
Java

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.


Responsibilities:

Ensure the quality of product feature development.

Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.

Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python.

Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.

Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.

Collaboration: Participate in project planning and execution with the team for efficient project delivery.
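The "reusable test scripts" above typically mean shared fixtures factored into a base class so that individual cases stay short. A toy unittest sketch (the function under test is hypothetical; a real suite here would drive Selenium or API clients from the same structure):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountTestBase(unittest.TestCase):
    """Shared fixture, reusable across many concrete test cases."""
    def setUp(self):
        self.base_price = 200.0

class TestDiscountHappyPath(DiscountTestBase):
    def test_half_off(self):
        self.assertEqual(apply_discount(self.base_price, 50), 100.0)

class TestDiscountValidation(DiscountTestBase):
    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            apply_discount(self.base_price, -5)

# Run programmatically and collect results, as a CI step would
loader = unittest.defaultTestLoader
suite = loader.loadTestsFromTestCase(TestDiscountHappyPath)
suite.addTests(loader.loadTestsFromTestCase(TestDiscountValidation))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Collecting the result object is what lets the pipeline publish transparent test metrics instead of just a pass/fail exit code.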


Requirements:

A Bachelor's degree in computer science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.

2-5 years of solid experience in software testing, with a focus on automation. Proficiency with defect tracking systems, code repositories, and IDEs.

A good grasp of programming languages like Python, Java, or JavaScript. Must be able to understand and write code.

Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).

Good team player with a proactive approach to continuous learning.

Sound understanding of the Agile software development methodology.

Experience in a SaaS-based product company or a fast-paced startup environment is a plus.

Deqode
Posted by Roshni Maji
Pune, Bengaluru (Bangalore), Gurugram, Chennai
4 - 8 yrs
₹7L - ₹26L / yr
SRE
Reliability engineering
Amazon Web Services (AWS)
Python

Job Title: Site Reliability Engineer (SRE)

Experience: 4+ Years

Work Location: Bangalore / Chennai / Pune / Gurgaon

Work Mode: Hybrid or Onsite (based on project need)

Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.


🛠️ Key Responsibilities

  • Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
  • Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting.
  • Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
  • Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
  • Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
  • Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
  • Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
  • Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.


Must-Have Skills

  • 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
  • Strong programming skills in Python (mandatory).
  • Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
  • Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
  • Strong background in Linux-based systems and shell scripting.
  • Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
  • Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
  • Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.


Chennai
4 - 5 yrs
₹1L - ₹14L / yr
Python
Django
Flask
API
RESTful APIs

At BigThinkCode, our technology solves complex problems. We are looking for a highly talented engineer to join our technology team in Chennai.


Our ideal candidate will have expert knowledge of software development processes and strong programming and problem-solving skills.


This is an opportunity to join a growing team and make a substantial impact at BigThinkCode. We have a challenging workplace where we welcome innovative ideas and talent, and offer growth opportunities in a positive environment.


The job description is below for your reference; if interested, please share your profile so we can connect and discuss.


Company: BigThinkCode Technologies

URL: https://www.bigthinkcode.com/

Experience: 4 – 5 years

Location: Chennai (Hybrid)


Responsibilities:

  • Work closely as part of the tech team to build new features.
  • Collaborate with managers, designers, and engineers to deliver user-facing features.
  • Write reusable code and build libraries for later use.
  • Utilize knowledge of programming languages and the software ecosystem to accomplish goals.
  • Design software systems and supporting infrastructure.
  • Contribute to the technical roadmap.


Required skills:

  • Familiarity with algorithms and data structures.
  • Expertise in OOP concepts and their implementation.
  • Hands-on experience with design patterns, testing, and debugging.
  • Familiarity with the Python programming language.
  • Experience in one or more Python frameworks such as Flask, Django, or FastAPI.
  • Conduct code reviews to ensure code quality.
  • Hands-on database experience (relational / NoSQL / ORMs).
  • Deployment and DevOps skills such as cloud and Docker are nice to have.
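On the design-patterns point above: the Strategy pattern is a good example of how OOP concepts translate into idiomatic Python, where functions are first-class so a strategy can simply be passed in. A sketch with invented pricing rules:

```python
from dataclasses import dataclass
from typing import Callable

# Each strategy is a plain callable: amount -> discounted amount
def no_discount(amount: float) -> float:
    return amount

def festive_discount(amount: float) -> float:
    return amount * 0.90  # flat 10% off

@dataclass
class Checkout:
    """Context object: delegates pricing to whichever strategy it was given."""
    pricing: Callable[[float], float] = no_discount

    def total(self, amount: float) -> float:
        return round(self.pricing(amount), 2)

regular = Checkout().total(1000.0)
festive = Checkout(pricing=festive_discount).total(1000.0)
```

Swapping behaviour by injecting a callable keeps the context class closed to modification while staying open to new pricing rules.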


Benefits:

  • Medical cover for employee and eligible dependents.
  • Tax-beneficial salary structure.
  • Comprehensive leave policy.
  • Competency development training programs.

US healthcare company

Agency job
via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
11 - 20 yrs
₹50L - ₹60L / yr
Generative AI
Python
TensorFlow
Google Cloud Platform (GCP)
POC

Job Title: AI Solutioning Architect – Healthcare IT

Role Summary:

The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).

Key Responsibilities:

  • Architect scalable AI solutions from data ingestion to deployment.
  • Align AI initiatives with business objectives and regulatory requirements (HIPAA).
  • Collaborate with cross-functional teams to deliver AI projects.
  • Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
  • Mentor technical teams and ensure best practices in MLOps.
  • Communicate complex concepts to diverse stakeholders.

Qualifications:

  • Bachelor’s/Master’s in Computer Science or related field.
  • 12+ years in software development/architecture with strong AI/ML focus.
  • Experience in healthcare IT and compliance (HIPAA).
  • Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
  • Hands-on with GCP (preferred) or other cloud platforms.
  • Strong leadership, problem-solving, and communication skills.


Hyderabad, Bengaluru (Bangalore), Mumbai, Delhi, Pune, Chennai
0 - 1 yrs
₹10L - ₹20L / yr
Python
Object Oriented Programming (OOPs)
JavaScript
Java
Data Structures


About NxtWave


NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.

Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.

Know more:

🌐 NxtWave | NIAT

About the Role

As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.


Key Responsibilities

  • Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
  • Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
  • Mentor students in academic, career, and project development goals.
  • Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
  • Drive research-led content development, and contribute to innovation in teaching methodologies.
  • Support capstone projects, hackathons, and collaborative research opportunities with industry.
  • Foster a high-performance learning environment in classes of 70–100 students.
  • Collaborate with cross-functional teams for continuous student development and program quality.
  • Actively participate in faculty training, peer reviews, and academic audits.


Eligibility & Requirements

  • Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
  • Strong academic and research orientation, preferably with publications or project contributions.
  • Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
  • A deep commitment to education, student success, and continuous improvement.

Must-Have Skills

  • Expertise in Python, Java, JavaScript, and advanced programming paradigms.
  • Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
  • Excellent communication, classroom delivery, and presentation skills.
  • Familiarity with academic content tools like Google Slides, Sheets, Docs.
  • Passion for educating, mentoring, and shaping future developers.

Good to Have

  • Industry experience or consulting background in software development or research-based roles.
  • Proficiency in version control systems (e.g., Git) and agile methodologies.
  • Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
  • A drive to innovate in teaching, curriculum design, and student engagement.

Why Join Us?

  • Be at the forefront of shaping India’s tech education revolution.
  • Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
  • Competitive compensation with strong growth potential.
  • Create impact at scale by mentoring hundreds of future-ready tech leaders.


VyTCDC
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Pune, Mumbai
4 - 12 yrs
₹3.5L - ₹37L / yr
Python
AI/ML

Job Summary:

We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems.

Key Responsibilities:

  • Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn.
  • Perform data preprocessing, feature engineering, and exploratory data analysis.
  • Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI.
  • Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions.
  • Optimize model performance and ensure robustness in real-time environments.
  • Maintain clear documentation of code, models, and processes.
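As a concrete miniature of the model design and implementation step above, a nearest-centroid classifier captures the fit/predict shape that libraries like Scikit-learn formalize (the training data here is made up):

```python
import math

def fit(X: list[list[float]], y: list[int]) -> dict[int, list[float]]:
    """Training: one centroid (feature-wise mean) per class label."""
    centroids: dict[int, list[float]] = {}
    for label in set(y):
        rows = [x for x, lbl in zip(X, y) if lbl == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids: dict[int, list[float]], x: list[float]) -> int:
    """Inference: label of the closest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))

X_train = [[0.0, 0.0], [1.0, 0.0], [9.0, 9.0], [10.0, 9.0]]
y_train = [0, 0, 1, 1]
model = fit(X_train, y_train)
label = predict(model, [0.5, 0.2])
```

Production work replaces this with a framework model, but the separation of a fit step from a predict step is exactly what gets wrapped behind a Flask or FastAPI endpoint.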

Required Skills:

  • Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Strong understanding of ML algorithms (classification, regression, clustering, deep learning).
  • Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP).
  • Familiarity with containerization (Docker, Kubernetes) and CI/CD practices.
  • Solid grasp of RESTful API development and integration.

Preferred Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Data Science, or related field.
  • 2–5 years of experience in Python development with a focus on AI/ML.
  • Exposure to MLOps practices and model monitoring tools.


Genspark India

Agency job
via Genspark by S Priyadharshini
Chennai
1 - 3 yrs
₹3L - ₹6L / yr
Embedded C
C++
C
DBMS
DSA

Genspark is hiring professionals for C development for their premium client.

Work Location- Chennai 

Entry Criteria 

Graduate from any engineering background / BSc / MSc / MCA with specialization in Computer Science, Electronics, or IT

Minimum 1 year experience in Industry 

Working knowledge of C, Embedded C, C++, and DSA

Programming Aptitude (Any Language) 

Basic understanding of programming constructs: variables, loops, conditionals, functions 

Logical thinking and algorithmic approach 

Computer Science Fundamentals: 

Data structures basics: arrays, stacks, queues, linked lists 

Operating System basics: what is a process/thread, memory, file system, etc. 

Basic understanding of compilation, runtime, networking and sockets etc. 

Problem Solving & Logical Reasoning 

Ability to trace logic, find errors, and reason through pseudocode 

Analytical and debugging capabilities 

Learning Attitude & Communication 

Demonstrated interest in low-level or systems programming (even if no experience) 

Willingness to learn C and work close to the OS level 

Clarity of thought and ability to explain what they do know 

Soft Skills : 

Able to explain and communicate the thoughts clearly in English 

Confident in solving new problems independently or with guidance 

Willingness to take feedback and iterate 

Evaluation Process 

Candidates will be assigned an online test, followed by technical screening.

Shortlisted candidates will have to appear for an F2F interview with the client in Chennai.

 

Deqode
Posted by Apoorva Jain
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Pune, Chennai, Nagpur, Indore, Ahmedabad, Kochi (Cochin), Delhi
3.5 - 8 yrs
₹4L - ₹15L / yr
Go (Golang)
Amazon Web Services (AWS)
Python

Role Overview:


We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.


Key Responsibilities:

  • Design and develop backend services, APIs, and microservices using Golang.
  • Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
  • Optimize application performance, scalability, and reliability.
  • Collaborate closely with frontend, DevOps, and product teams.
  • Write clean, maintainable code and participate in code reviews.
  • Implement best practices in security, performance, and cloud architecture.
  • Contribute to CI/CD pipelines and automated deployment processes.
  • Debug and resolve technical issues across the stack.


Required Skills & Qualifications:

  • 3.5+ years of hands-on experience with Golang development.
  • Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
  • Proficient in developing and consuming RESTful APIs.
  • Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
  • Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
  • Good understanding of microservices architecture and distributed systems.
  • Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
  • Familiarity with Git, CI/CD pipelines, and agile workflows.
  • Strong problem-solving, debugging, and communication skills.


Nice to Have:

  • Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
  • Exposure to NoSQL databases like DynamoDB or MongoDB.
  • Contributions to open-source Golang projects or an active GitHub portfolio.


NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Chennai
8 - 12 yrs
₹10L - ₹26L / yr
Python
Machine Learning (ML)
Scikit-Learn
TensorFlow
PyTorch

Job Title : Senior Machine Learning Engineer

Experience : 8+ Years

Location : Chennai

Notice Period : Immediate Joiners Only

Work Mode : Hybrid


Job Summary :

We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.

The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.


Mandatory Skills :

  • Programming Languages : Python
  • Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
  • ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering
  • Operating Systems : RHEL or any Unix-based OS
  • Databases : Oracle or any relational database
  • Version Control : Git
  • Development Methodologies : Agile
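K-Means, listed above, is simple enough to sketch without any library: assign each point to its nearest centroid, recompute each centroid as its cluster's mean, and repeat until stable. A minimal 1-D illustration of Lloyd's algorithm (Scikit-learn's KMeans is what you would use in practice; the data is invented):

```python
def kmeans_1d(points: list[float], centroids: list[float], iters: int = 10):
    """Lloyd's algorithm on 1-D data: assign, then update, repeatedly."""
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for each point
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: each centroid moves to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.2, 9.8]
centers = kmeans_1d(data, centroids=[0.0, 5.0])
```

DBSCAN and fuzzy clustering relax exactly the two assumptions visible here: a fixed cluster count and hard, all-or-nothing assignment.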

Desired Skills :

  • Experience with issue tracking tools such as Azure DevOps or JIRA.
  • Understanding of data science concepts.
  • Familiarity with Big Data algorithms, models, and libraries.

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore), Mumbai, Gurugram, Pune, Hyderabad, Chennai
3 - 6 yrs
₹5L - ₹20L / yr
IBM Sterling Integrator Developer
IBM Sterling B2B Integrator
Shell Scripting
Python
SQL

Job Title : IBM Sterling Integrator Developer

Experience : 3 to 5 Years

Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune

Employment Type : Full-Time


Job Description :

We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.

The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.

Key Responsibilities :

  • Develop, configure, and maintain IBM Sterling Integrator solutions.
  • Design and implement integration solutions using IBM Sterling.
  • Collaborate with cross-functional teams to gather requirements and provide solutions.
  • Work with custom languages and scripting to enhance and automate integration processes.
  • Ensure optimal performance and security of integration systems.

Must-Have Skills :

  • Hands-on experience with IBM Sterling Integrator and associated integration tools.
  • Proficiency in at least one custom scripting language.
  • Strong command over Shell scripting, Python, and SQL (mandatory).
  • Good understanding of EDI standards and protocols is a plus.

Interview Process :

  • 2 Rounds of Technical Interviews.

Additional Information :

  • Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.

Deqode
Posted by Alisha Das
Bengaluru (Bangalore), Mumbai, Pune, Chennai, Gurugram
5.6 - 7 yrs
₹10L - ₹28L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL

Job Summary:

As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.

Key Responsibilities:

  • Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
  • Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue
  • Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
  • Work with AWS DMS and RDS for database integration and migration
  • Optimize data flows and system performance for speed and cost-effectiveness
  • Deploy and manage infrastructure using AWS CloudFormation templates
  • Collaborate with cross-functional teams to gather requirements and build robust data solutions
  • Ensure data integrity, quality, and security across all systems and processes

Required Skills & Experience:

  • 6+ years of experience in Data Engineering with strong AWS expertise
  • Proficient in Python and PySpark for data processing and ETL development
  • Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
  • Strong SQL skills for building complex queries and performing data analysis
  • Familiarity with AWS CloudFormation and infrastructure as code principles
  • Good understanding of serverless architecture and cost-optimized design
  • Ability to write clean, modular, and maintainable code
  • Strong analytical thinking and problem-solving skills


Read more
Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Pune, Bengaluru (Bangalore), Gurugram, Chennai, Mumbai
5 - 7 yrs
₹6L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
AWS Glue
Python
PySpark

Position: AWS Data Engineer

Experience: 5 to 7 Years

Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram

Work Mode: Hybrid (3 days work from office per week)

Employment Type: Full-time

About the Role:

We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.

Key Responsibilities:

  • Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
  • Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
  • Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
  • Optimize data models and storage for cost-efficiency and performance.
  • Write advanced SQL queries to support complex data analysis and reporting requirements.
  • Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
  • Ensure high data quality and integrity across platforms and processes.
  • Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
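Data-lake layouts of the kind described above are typically Hive-style partitioned so that Athena and Glue crawlers can prune by date. A minimal sketch; the bucket, table, and file names are illustrative only.

```python
# Hypothetical helper that builds Hive-style partitioned S3 keys
# (dt=YYYY-MM-DD), the convention Athena/Glue partition pruning expects.
from datetime import date

def partitioned_key(prefix: str, table: str, day: date, filename: str) -> str:
    return f"{prefix}/{table}/dt={day.isoformat()}/{filename}"

key = partitioned_key("s3://lake/silver", "orders", date(2024, 5, 1), "part-0000.parquet")
print(key)  # s3://lake/silver/orders/dt=2024-05-01/part-0000.parquet
```

Queries that filter on `dt` then scan only the matching prefixes, which is where most of the Athena cost savings come from.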

Required Skills & Experience:

  • Strong hands-on experience with Python or PySpark for data processing.
  • Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
  • Proficiency in writing complex SQL queries and optimizing them for performance.
  • Familiarity with serverless architectures and AWS best practices.
  • Experience in designing and maintaining robust data architectures and data lakes.
  • Ability to troubleshoot and resolve data pipeline issues efficiently.
  • Strong communication and stakeholder management skills.


Read more
Poshmark

at Poshmark

3 candid answers
1 recruiter
Eman Khan
Posted by Eman Khan
Chennai
7 - 12 yrs
₹40L - ₹90L / yr
Machine Learning (ML)
Python
Scikit-Learn
NumPy
pandas

Are you passionate about the power of data and excited to leverage cutting-edge AI/ML to drive business impact? At Poshmark, we tackle complex challenges in personalization, trust & safety, marketing optimization, product experience, and more.


Why Poshmark?

As a leader in Social Commerce, Poshmark offers an unparalleled opportunity to work with extensive multi-platform social and commerce data. With over 130 million users generating billions of daily events and petabytes of rapidly growing data, you’ll be at the forefront of data science innovation. If building impactful, data-driven AI solutions for millions excites you, this is your place.


What You’ll Do

  • Drive end-to-end data science initiatives, from ideation to deployment, delivering measurable business impact through projects such as feed personalization, product recommendation systems, and attribute extraction using computer vision.
  • Collaborate with cross-functional teams, including ML engineers, product managers, and business stakeholders, to design and deploy high-impact models.
  • Develop scalable solutions for key areas like product, marketing, operations, and community functions.
  • Own the entire ML Development lifecycle: data exploration, model development, deployment, and performance optimization.
  • Apply best practices for managing and maintaining machine learning models in production environments.
  • Explore and experiment with emerging AI trends, technologies, and methodologies to keep Poshmark at the cutting edge.


Your Experience & Skills

  • Ideal Experience: 6-9 years of building scalable data science solutions in a big data environment. Experience with personalization algorithms, recommendation systems, or user behavior modeling is a big plus.
  • Machine Learning Knowledge: Hands-on experience with key ML algorithms, including CNNs, Transformers, and Vision Transformers. Familiarity with Large Language Models (LLMs) and techniques like RAG or PEFT is a bonus.
  • Technical Expertise: Proficiency in Python, SQL, and Spark (Scala or PySpark), with hands-on experience in deep learning frameworks like PyTorch or TensorFlow. Familiarity with ML engineering tools like Flask, Docker, and MLOps practices.
  • Mathematical Foundations: Solid grasp of linear algebra, statistics, probability, calculus, and A/B testing concepts.
  • Collaboration & Communication: Strong problem-solving skills and ability to communicate complex technical ideas to diverse audiences, including executives and engineers.
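The A/B testing concepts listed above can be made concrete with a two-proportion z-test, here in pure Python with no SciPy dependency. The conversion counts are made-up examples, not Poshmark data.

```python
# Two-proportion z-test for an A/B experiment: did variant B's
# conversion rate differ significantly from variant A's?
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p_b - p_a) / se
    # two-sided p-value via the normal CDF (Phi(x) = 0.5 * (1 + erf(x / sqrt(2))))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the chosen significance level (commonly 0.05) would justify rolling out variant B, subject to the usual caveats about peeking and multiple comparisons.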
Read more
Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Bengaluru (Bangalore), Pune, Mumbai, Chennai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
Python
PySpark
Amazon Web Services (AWS)
Amazon Redshift

Position: AWS Data Engineer

Experience: 5 to 7 Years

Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram

Work Mode: Hybrid (3 days work from office per week)

Employment Type: Full-time

About the Role:

We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.

Key Responsibilities:

  • Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
  • Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
  • Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
  • Optimize data models and storage for cost-efficiency and performance.
  • Write advanced SQL queries to support complex data analysis and reporting requirements.
  • Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
  • Ensure high data quality and integrity across platforms and processes.
  • Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.

Required Skills & Experience:

  • Strong hands-on experience with Python or PySpark for data processing.
  • Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
  • Proficiency in writing complex SQL queries and optimizing them for performance.
  • Familiarity with serverless architectures and AWS best practices.
  • Experience in designing and maintaining robust data architectures and data lakes.
  • Ability to troubleshoot and resolve data pipeline issues efficiently.
  • Strong communication and stakeholder management skills.


Read more
Bengaluru and chennai based tech startup

Bengaluru and chennai based tech startup

Agency job
via Recruit Square by Priyanka choudhary
Bengaluru (Bangalore), Chennai
6 - 12 yrs
₹19L - ₹35L / yr
Linux/Unix
TCP/IP
Windows Azure
Amazon Web Services (AWS)
SaaS

Has substantial hands-on expertise in Linux OS, HTTPS, proxies, and Perl/Python scripting.

Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.

Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid proxy, HTTPS, and scripting. Will further align the network with the Company’s objectives through continuous development, improvement, and automation.

Preferably 10+ years of experience in network design and delivery of technology-centric, customer-focused services.

Preferably 3+ years in modern software-defined networking, preferably in cloud-based environments.

Diploma or bachelor’s degree in engineering, Computer Science/Information Technology, or its equivalent.

Preferably possess a valid RHCE (Red Hat Certified Engineer) certification

Preferably possess any vendor Proxy certification (Forcepoint/ Websense/ bluecoat / equivalent)

Must possess advanced knowledge of TCP/IP concepts and fundamentals. Good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.

Fundamental understanding of proxies and PAC files.
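PAC files themselves are JavaScript, but the routing decision a `FindProxyForURL()` function encodes can be sketched in Python for clarity. The host patterns and proxy address below are illustrative only, not from any real deployment.

```python
# Python mirror of a PAC file's routing decision: internal hosts go
# DIRECT, everything else goes through the Squid proxy.
from fnmatch import fnmatch

INTERNAL_PATTERNS = ["*.corp.example.com", "localhost"]

def find_proxy_for_url(host: str) -> str:
    """Return a PAC-style directive for the given hostname."""
    if any(fnmatch(host, pat) for pat in INTERNAL_PATTERNS):
        return "DIRECT"
    return "PROXY squid.corp.example.com:3128"

print(find_proxy_for_url("intranet.corp.example.com"))  # DIRECT
print(find_proxy_for_url("www.python.org"))             # PROXY squid.corp.example.com:3128
```

The real PAC file would express the same branching with `shExpMatch()`/`isPlainHostName()` and be served to browsers via its auto-config URL.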

Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.

Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.

Coding skills such as Perl, Python, Shell scripting will be advantageous.

Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.

Excellent communication skills – oral, written and presentation, across various types of target audiences.

Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, able to cope under pressure and will roll-up his/her sleeves to drive a project to success in a challenging environment.

Read more
Deqode

at Deqode

1 recruiter
Shraddha Katare
Posted by Shraddha Katare
Bengaluru (Bangalore), Pune, Chennai, Mumbai, Gurugram
5 - 7 yrs
₹5L - ₹19L / yr
Amazon Web Services (AWS)
Python
PySpark
SQL
redshift

Profile: AWS Data Engineer

Mode- Hybrid

Experience: 5 to 7 years

Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram


Roles and Responsibilities

  • Design and maintain ETL pipelines using AWS Glue and Python/PySpark
  • Optimize SQL queries for Redshift and Athena
  • Develop Lambda functions for serverless data processing
  • Configure AWS DMS for database migration and replication
  • Implement infrastructure as code with CloudFormation
  • Build optimized data models for performance
  • Manage RDS databases and AWS service integrations
  • Troubleshoot and improve data processing efficiency
  • Gather requirements from business stakeholders
  • Implement data quality checks and validation
  • Document data pipelines and architecture
  • Monitor workflows and implement alerting
  • Keep current with AWS services and best practices
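A hedged sketch of the Lambda-based serverless processing mentioned above: the handler shape follows the S3 event-notification format, but the bucket keys and the CSV-only filter are invented for illustration.

```python
# Shape of an AWS Lambda handler reacting to S3 object-created events:
# pick out the CSV drops and acknowledge the rest.
import json

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if key.endswith(".csv"):          # only process CSV drops
            processed.append(key)
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

event = {"Records": [
    {"s3": {"object": {"key": "raw/orders.csv"}}},
    {"s3": {"object": {"key": "raw/readme.txt"}}},
]}
print(handler(event))
```

In production the body of the loop would hand each key to Glue, or read and transform the object directly, with retries and dead-letter handling configured on the trigger.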


Required Technical Expertise:

  • Python/PySpark for data processing
  • AWS Glue for ETL operations
  • Redshift and Athena for data querying
  • AWS Lambda and serverless architecture
  • AWS DMS and RDS management
  • CloudFormation for infrastructure
  • SQL optimization and performance tuning
Read more
Deqode

at Deqode

1 recruiter
Alisha Das
Posted by Alisha Das
Pune, Mumbai, Bengaluru (Bangalore), Chennai
4 - 7 yrs
₹5L - ₹15L / yr
Amazon Web Services (AWS)
Python
PySpark
AWS Glue
Amazon Redshift

Job Overview:

We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.

Key Responsibilities:

  • Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
  • Integrate data from diverse sources and ensure its quality, consistency, and reliability.
  • Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
  • Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
  • Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
  • Automate data validation, transformation, and loading processes to support real-time and batch data processing.
  • Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
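The automated data-validation step mentioned above might look like this minimal sketch: null, duplicate, and range checks over rows. The field names (`order_id`, `quantity`) are chosen for illustration, not from any real schema.

```python
# Simple row-level data-quality checks: missing keys, duplicates,
# and out-of-range values, collected as human-readable errors.
def validate(rows, key="order_id"):
    errors = []
    seen = set()
    for i, row in enumerate(rows):
        if row.get(key) is None:
            errors.append(f"row {i}: missing {key}")
        elif row[key] in seen:
            errors.append(f"row {i}: duplicate {key}={row[key]}")
        else:
            seen.add(row[key])
        if not (0 <= row.get("quantity", 0)):
            errors.append(f"row {i}: negative quantity")
    return errors

rows = [{"order_id": 1, "quantity": 2},
        {"order_id": 1, "quantity": -5},
        {"order_id": None, "quantity": 1}]
print(validate(rows))
```

In an AWS pipeline the same checks would typically run as a Glue job step or with a framework such as Deequ or Great Expectations, failing the run or routing bad rows to a quarantine location.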

Required Skills:

  • 5 to 7 years of hands-on experience in data engineering roles.
  • Strong proficiency in Python and PySpark for data transformation and scripting.
  • Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
  • Solid understanding of SQL and database optimization techniques.
  • Experience working with large-scale data pipelines and high-volume data environments.
  • Good knowledge of data modeling, warehousing, and performance tuning.

Preferred/Good to Have:

  • Experience with workflow orchestration tools like Airflow or Step Functions.
  • Familiarity with CI/CD for data pipelines.
  • Knowledge of data governance and security best practices on AWS.
Read more
Linarc Inc

at Linarc Inc

3 recruiters
jhansi peter
Posted by jhansi peter
Chennai
4 - 9 yrs
₹15L - ₹35L / yr
Python
Django

What We’re Looking For

  • 4+ years of backend development experience in scalable web applications.
  • Strong expertise in Python, Django ORM, and RESTful API design.
  • Familiarity with relational databases such as PostgreSQL and MySQL
  • Comfortable working in a startup environment with multiple priorities.
  • Understanding of cloud-native architectures and SaaS models.
  • Strong ownership mindset and ability to work with minimal supervision.
  • Excellent communication and teamwork skills.


Read more
ZeMoSo Technologies

at ZeMoSo Technologies

11 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)

Work Mode: Hybrid


B.Tech, BE, M.Tech, or ME candidates only - Mandatory



Must-Have Skills:

● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note the salary bracket will vary according to the candidate's experience:

- Experience from 4 yrs to 6 yrs - Salary upto 22 LPA

- Experience from 5 yrs to 8 yrs - Salary upto 30 LPA

- Experience more than 8 yrs - Salary upto 40 LPA

Read more
Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Chennai
2 - 4 yrs
₹15L - ₹20L / yr
Data engineering
Python
SQL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

What you’ll do

  • Design, build, and maintain robust ETL/ELT pipelines for product and analytics data
  • Work closely with business, product, analytics, and ML teams to define data needs
  • Ensure high data quality, lineage, versioning, and observability
  • Optimize performance of batch and streaming jobs
  • Automate and scale ingestion, transformation, and monitoring workflows
  • Document data models and key business metrics in a self-serve way
  • Use AI tools to accelerate development, troubleshooting, and documentation
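An ELT pipeline of the kind described above can be sketched with the stdlib `sqlite3` module standing in for the warehouse: load raw events first, then transform in SQL. The table and column names are hypothetical.

```python
# Minimal ELT sketch: raw events are loaded as-is, then a transform
# step builds a reporting table inside the "warehouse".
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (day TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [("2024-05-01", 10.0), ("2024-05-01", 5.0), ("2024-05-02", 7.5)])

# Transform step: aggregate raw events into a daily revenue table.
conn.execute("""
    CREATE TABLE daily_revenue AS
    SELECT day, SUM(amount) AS revenue
    FROM raw_events GROUP BY day
""")
rows = conn.execute("SELECT day, revenue FROM daily_revenue ORDER BY day").fetchall()
print(rows)  # [('2024-05-01', 15.0), ('2024-05-02', 7.5)]
```

With BigQuery or Snowflake the pattern is the same: cheap load into raw tables, SQL transforms (often managed by dbt or an Airflow/Prefect DAG) into modeled tables.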


Must-Haves:

  • 2–4 years of experience as a data engineer (product or analytics-focused preferred)
  • Solid hands-on experience with Python and SQL
  • Experience with data pipeline orchestration tools like Airflow or Prefect
  • Understanding of data modeling, warehousing concepts, and performance optimization
  • Familiarity with cloud platforms (GCP, AWS, or Azure)
  • Bachelor's in Computer Science, Data Engineering, or a related field
  • Strong problem-solving mindset and AI-native tooling comfort (Copilot, GPTs)


Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Pune, Chennai
3 - 6 yrs
₹2L - ₹12L / yr
Test Automation (QA)
Automation
Software Testing (QA)
Generative AI
Selenium

Job Title : Automation Quality Engineer (Gen AI)

Experience : 3 to 5+ Years

Location : Bangalore / Chennai / Pune


Role Overview :

We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.

You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.


Key Responsibilities :

  • Develop and maintain test strategies for AI models, APIs, and user interfaces.
  • Build automation frameworks and integrate into CI/CD pipelines.
  • Validate model accuracy, robustness, and monitor model drift.
  • Perform regression, performance, load, and security testing.
  • Log and track issues; collaborate with developers to resolve them.
  • Ensure compliance with data privacy and ethical AI standards.
  • Document QA processes and testing outcomes.
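Model-drift monitoring, mentioned above, is often done with the Population Stability Index (PSI) over binned score distributions. A minimal sketch; the 0.2 threshold is a common convention, not a value from any specific tool.

```python
# Population Stability Index between a baseline (training-time) and a
# current (production) distribution of binned model scores.
from math import log

def psi(expected, actual, eps=1e-6):
    """PSI between two binned distributions given as lists of proportions."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        score += (a - e) * log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]   # score bins at training time
current  = [0.10, 0.20, 0.30, 0.40]   # score bins in production
drift = psi(baseline, current)
print(f"PSI = {drift:.3f}")  # > 0.2 is commonly treated as significant drift
```

A QA pipeline would compute this on a schedule and raise an alert (or open a ticket) when the index crosses the agreed threshold.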

Mandatory Skills :

  • Test Automation : Selenium, Playwright, or Deep Eval
  • Programming/Scripting : Python, JavaScript
  • API Testing : Postman, REST Assured
  • Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
  • Performance Testing : JMeter
  • Bug Tracking : Azure DevOps
  • Methodologies : Agile delivery experience
  • Soft Skills : Strong communication and problem-solving abilities
Read more
BigRio
Disha Bhardwaj
Posted by Disha Bhardwaj
Chennai
8 - 12 yrs
₹25L - ₹30L / yr
Natural Language Processing (NLP)
Large Language Models (LLM) tuning
Artificial Intelligence (AI)
skill iconPython

Job Title: AI Engineer - NLP/LLM Data Product Engineer

Location: Chennai, TN (Hybrid)

Duration: Full time


Job Summary:

About the Role:

We are growing our Data Science and Data Engineering team and are looking for an experienced AI Engineer specializing in creating GenAI LLM solutions. This position involves collaborating with clients and their teams, discovering gaps for automation using AI, designing customized AI solutions, and implementing technologies to streamline data entry processes within the healthcare sector.


Responsibilities:

·        Conduct detailed consultations with clients' functional teams to understand requirements; one use case involves handwritten medical records.

·        Analyze existing data entry workflows and propose automation opportunities.

Design:

·        Design tailored AI-driven solutions for the extraction and digitization of information from handwritten medical records.

·        Collaborate with clients to define project scopes and objectives.

Technology Selection:

·        Evaluate and recommend AI technologies, focusing on NLP, LLM and machine learning.

·        Ensure seamless integration with existing systems and workflows.

Prototyping and Testing:

·        Develop prototypes and proof-of-concept models to demonstrate the feasibility of proposed solutions.

·        Conduct rigorous testing to ensure accuracy and reliability.

Implementation and Integration:

·        Work closely with clients and IT teams to integrate AI solutions effectively.

·        Provide technical support during the implementation phase.

Training and Documentation:

·        Develop training materials for end-users and support staff.

·        Create comprehensive documentation for implemented solutions.

Continuous Improvement:


·        Monitor and optimize the performance of deployed solutions.

·        Identify opportunities for further automation and improvement.

Qualifications:

·        Advanced degree in Computer Science, Artificial Intelligence, or related field (Master's or PhD required).

·        Proven experience in developing and implementing AI solutions for data entry automation.

·        Expertise in NLP, LLM and other machine-learning techniques.

·        Strong programming skills, especially in Python.

·        Familiarity with healthcare data privacy and regulatory requirements.


Additional Qualifications( great to have):

An ideal candidate will have expertise in the most current LLM/NLP models, particularly in the extraction of data from clinical reports, lab reports, and radiology reports. The ideal candidate should have a deep understanding of EMR/EHR applications and patient-related data.
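As a hedged sketch of extraction from lab-report text: a production system would use an LLM or a trained NER model, but a rule-based baseline clarifies the task. The pattern, test names, and units below are illustrative only, not a clinical standard.

```python
# Rule-based baseline for pulling (test, value, unit) triples out of
# lab-report sentences. Real pipelines would replace this with an
# LLM prompt or a clinical NER model.
import re

LAB_PATTERN = re.compile(
    r"(?P<test>hemoglobin|glucose|creatinine)\s*[:=]?\s*"
    r"(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>mg/dL|g/dL)",
    re.IGNORECASE,
)

def extract_labs(text):
    return [(m["test"].lower(), float(m["value"]), m["unit"])
            for m in LAB_PATTERN.finditer(text)]

note = "Hemoglobin: 13.5 g/dL. Fasting glucose 110 mg/dL."
print(extract_labs(note))
```

A baseline like this is also useful as a regression check against the LLM-based extractor: on sentences the regex handles, the two should agree.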

Read more
Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM to 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
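The Raw → Silver → Gold flow above can be sketched in plain Python; a real implementation would use Databricks/PySpark tables, and the record shapes here are hypothetical.

```python
# Medallion-style layering: Raw (as ingested) -> Silver (cleaned,
# deduplicated) -> Gold (business-level aggregate).
def to_silver(raw_rows):
    """Silver: typed, deduplicated records."""
    seen, silver = set(), []
    for r in raw_rows:
        if r.get("id") and r["id"] not in seen:
            seen.add(r["id"])
            silver.append({"id": r["id"], "value": float(r["value"])})
    return silver

def to_gold(silver_rows):
    """Gold: aggregate ready for reporting."""
    return {"count": len(silver_rows),
            "total": sum(r["value"] for r in silver_rows)}

raw = [{"id": "a", "value": "1.0"}, {"id": "a", "value": "1.0"}, {"id": "b", "value": "2.5"}]
print(to_gold(to_silver(raw)))  # {'count': 2, 'total': 3.5}
```

In the framework described, each arrow would be a Databricks job orchestrated by Airflow, writing Delta tables rather than in-memory lists.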

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Read more
OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
Python
Spark
SQL
AWS CloudFormation
Machine Learning (ML)

Level of skills and experience:


5 years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.

Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.

Read more
All time design
Prem kumar
Posted by Prem kumar
Chennai
1 - 3 yrs
₹2L - ₹4L / yr
Flask
Python
MongoDB
RESTful APIs
Payment gateways

Job description


We are looking for an experienced Python developer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers.


Responsibilities:


  • Coordinating with development teams to determine application requirements.


  • Writing scalable code using Python programming language.


  • Testing and debugging applications.


  • Developing back-end components.


  • Integrating user-facing elements using server-side logic.


  • Assessing and prioritizing client feature requests.


  • Integrating data storage solutions.


  • Coordinating with front-end developers.


  • Reprogramming existing databases to improve functionality.


  • Developing digital tools to monitor online traffic.


Requirements:


  • Bachelor's degree in Computer Science, Computer Engineering, or related field.


  • 2-7 years of experience as a Python Developer.


  • Expert knowledge of Python, the Flask framework, and FastAPI.


  • Solid experience with MongoDB and Elasticsearch.


  • Work experience with RESTful APIs.


  • A deep understanding of multi-process architecture and the threading limitations of Python.


  • Ability to integrate multiple data sources into a single system.


  • Familiarity with testing tools.


  • Ability to collaborate on projects and work independently when required.


  • Excellent troubleshooting skills.


  • Good project management skills.
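On the threading-limitations point above: the GIL serializes CPU-bound Python threads, so thread pools preserve correctness but not CPU parallelism (process pools or C extensions are the usual workaround). A small sketch of the correctness half:

```python
# CPU-bound work fanned out over a thread pool: the GIL means the four
# tasks do not run in parallel, but each still produces the right answer.
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n):
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_task, [1000] * 4))

print(results[0] == cpu_task(1000))  # True: threads give correct results
```

For genuine CPU parallelism the same code would swap in `ProcessPoolExecutor`; for I/O-bound work (API calls, database reads), threads are already a good fit because the GIL is released while waiting.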


SKILLS:


  • PYTHON
  • MONGODB
  • FLASK
  • REST API DEVELOPMENT
  • TWILIO


Job Type: Full-time


Pay: ₹10,000.00 - ₹30,000.00 per month


Benefits:

  • Flexible schedule
  • Paid time off


Schedule:

  • Day shift


Supplemental Pay:

  • Overtime pay


Ability to commute/relocate:

  • Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)


Experience:

  • Python: 1 year (Required)


Work Location: In person

Read more
Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Gurugram, Bhopal, Jaipur
5 - 15 yrs
₹20L - ₹35L / yr
Spark
ETL
Data Transformation Tool (DBT)
Python
Apache Airflow

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.


Qualifications & Experience:


Bachelor's or master's degree in Computer Science, Information Systems, or a related field.


5+ years of experience in data engineering, with expertise in data architecture and pipeline development.


☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.


Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.


Strong proficiency in Python and data modelling.


Experience in testing and validation of data pipelines.


Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.


If you meet the above criteria and are interested, please share your updated CV along with the following details:


Total Experience:


Current CTC:


Expected CTC:


Current Location:


Preferred Location:


Notice Period / Last Working Day (if serving notice):


⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!

Read more
top MNC

top MNC

Agency job
via Vy Systems by thirega thanasekaran
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai
6 - 14 yrs
₹6L - ₹25L / yr
Python
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI

Key Responsibilities:

  • Develop and maintain scalable Python applications for AI/ML projects.
  • Design, train, and evaluate machine learning models for classification, regression, NLP, computer vision, or recommendation systems.
  • Collaborate with data scientists, ML engineers, and software developers to integrate models into production systems.
  • Optimize model performance and ensure low-latency inference in real-time environments.
  • Work with large datasets to perform data cleaning, feature engineering, and data transformation.
  • Stay current with new developments in machine learning frameworks and Python libraries.
  • Write clean, testable, and efficient code following best practices.
  • Develop RESTful APIs and deploy ML models via cloud or container-based solutions (e.g., AWS, Docker, Kubernetes).
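The data-cleaning and feature-engineering step above can be sketched in plain Python; pandas or scikit-learn would be typical in practice, and the fields and bucketing rules here are invented for illustration.

```python
# Toy feature engineering: mean-impute a missing age, bucket it, and
# derive a crude magnitude feature from income.
def engineer_features(rows):
    ages = [r["age"] for r in rows if r.get("age") is not None]
    mean_age = sum(ages) / len(ages)              # simple mean imputation
    out = []
    for r in rows:
        age = r["age"] if r.get("age") is not None else mean_age
        out.append({
            "age": age,
            "age_bucket": "senior" if age >= 60 else "adult",
            "income_magnitude": len(str(int(r["income"]))),  # digit count as a magnitude feature
        })
    return out

rows = [{"age": 30, "income": 50_000}, {"age": None, "income": 120_000}, {"age": 70, "income": 8_000}]
print(engineer_features(rows))
```

In a production setting the same transforms would live in a fitted pipeline (e.g. a scikit-learn `Pipeline`) so the identical imputation statistics are applied at inference time.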


Share CV to


Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Hyderabad, Bengaluru (Bangalore), Pune, Mumbai, Kolkata, Delhi, Noida
12 - 14 yrs
₹11L - ₹27L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript

Responsibilities

  • Develop and maintain robust APIs to support various applications and services.
  • Design and implement scalable solutions using AWS cloud services.
  • Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
  • Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
  • Ensure the security and integrity of applications by implementing best practices and security measures.
  • Optimize application performance and troubleshoot issues to ensure smooth operation.
  • Provide technical guidance and mentorship to junior team members.
  • Conduct code reviews to ensure adherence to coding standards and best practices.
  • Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
  • Develop and maintain documentation for code, processes, and procedures.
  • Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
  • Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
  • Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.

 

Qualifications

  • Possess strong expertise in developing and maintaining APIs.
  • Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
  • Have extensive experience with Python frameworks such as Flask and Django.
  • Exhibit strong analytical and problem-solving skills to address complex technical challenges.
  • Show ability to collaborate effectively with cross-functional teams and stakeholders.
  • Display excellent communication skills to convey technical concepts clearly.
  • Have a background in the Consumer Lending domain is a plus.
  • Demonstrate commitment to continuous learning and staying updated with industry trends.
  • Possess a strong understanding of agile development methodologies.
  • Show experience in mentoring and guiding junior team members.
  • Exhibit attention to detail and a commitment to delivering high-quality software solutions.
  • Demonstrate ability to work effectively in a hybrid work model.
  • Show a proactive approach to identifying and addressing potential issues before they become problems.
Read more
Rigel Networks Pvt Ltd
Minakshi Soni
Posted by Minakshi Soni
Bengaluru (Bangalore), Pune, Mumbai, Chennai
8 - 12 yrs
₹8L - ₹10L / yr
Amazon Web Services (AWS)
Terraform
Amazon Redshift
Redshift
Snowflake

Dear Candidate,


We are urgently Hiring AWS Cloud Engineer for Bangalore Location.

Position: AWS Cloud Engineer

Location: Bangalore

Experience: 8-11 yrs

Skills: Aws Cloud

Salary: Best in Industry (20-25% Hike on the current ctc)

Note:

Only immediate joiners or candidates with up to 15 days' notice will be preferred.

Only candidates from Tier 1 companies will be shortlisted and selected.

Candidates with a notice period of more than 30 days will be rejected during screening.

Offer shoppers will be rejected.


Job description:

 

Description:

 

Title: AWS Cloud Engineer

Prefer BLR / HYD – else any location is fine

Work Mode: Hybrid – based on HR rule (currently 1 day per month)


Shift Timings: 24x7 (work in shifts on a rotational basis)

Total Experience: 8+ years, with at least 5 years of relevant experience.

Must have: AWS platform, Terraform, Redshift/Snowflake, Python/shell scripting



Experience and Skills Requirements:


Experience:

8 years of experience in a technical role working with AWS


Mandatory

Technical troubleshooting and problem solving

AWS management of large-scale IaaS and PaaS solutions

Cloud networking and security fundamentals

Experience using containerization in AWS

Working data warehouse knowledge; Redshift and Snowflake preferred

Working with IaC – Terraform and CloudFormation

Working understanding of scripting languages, including Python and shell

Collaboration and communication skills

Highly adaptable to changes in a technical environment

 

Optional

Experience using monitoring and observability toolsets, including Splunk and Datadog

Experience using GitHub Actions

Experience using AWS RDS/SQL-based solutions

Experience working with streaming technologies, including Kafka and Apache Flink

Experience working with ETL environments

Experience working with the Confluent Cloud platform


Certifications:


Minimum

AWS Certified SysOps Administrator – Associate

AWS Certified DevOps Engineer - Professional



Preferred


AWS Certified Solutions Architect – Associate


Responsibilities:


Responsible for the technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.


The following is a list of expected responsibilities:


To manage and support a customer’s AWS platform

To be technical hands on

Provide Incident and Problem management on the AWS IaaS and PaaS Platform

Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner

Actively monitor an AWS platform for technical issues

To be involved in the resolution of technical incident tickets

Assist in the root cause analysis of incidents

Assist with improving efficiency and processes within the team

Examining traces and logs

Working with third party suppliers and AWS to jointly resolve incidents
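Much of the incident triage described above starts with a quick pass over logs. A minimal sketch of that kind of helper (the log format and service names here are invented for illustration):

```python
import re
from collections import Counter

# Hypothetical log format: "2024-01-01T00:00:00Z ERROR payments Timeout calling RDS"
LOG_LINE = re.compile(r"^\S+\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<message>.*)$")

def errors_by_service(lines):
    """Count ERROR entries per service to spot the noisiest component."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

sample = [
    "2024-01-01T00:00:01Z INFO api request served",
    "2024-01-01T00:00:02Z ERROR payments Timeout calling RDS",
    "2024-01-01T00:00:03Z ERROR payments Connection pool exhausted",
    "2024-01-01T00:00:04Z ERROR auth Token validation failed",
]
print(errors_by_service(sample).most_common(1))  # [('payments', 2)]
```

In practice the same idea is usually expressed as a Splunk or CloudWatch Logs Insights query rather than a script, but the triage logic is the same.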


Good to have:


Confluent Cloud

Snowflake




Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Rigel Networks

Worldwide Locations: USA | HK | IN 

Coinfantasy
Indira Priyadharshini
Posted by Indira Priyadharshini
Remote, Chennai
3 - 10 yrs
₹10L - ₹40L / yr
Python
PyTorch
Large Language Models (LLM) tuning
Large Language Models (LLM)
Generative AI

CoinFantasy is looking for an experienced Senior AI Architect to lead both the decentralised protocol development and the design of AI-driven applications on this network. As a visionary in AI and distributed computing, you will play a central role in shaping the protocol’s technical direction, enabling efficient task distribution, and scaling AI use cases across a heterogeneous, decentralised infrastructure.

Job Responsibilities

  • Architect and oversee the protocol’s development, focusing on dynamic node orchestration, layer-wise model sharding, and secure, P2P network communication.
  • Drive the end-to-end creation of AI applications, ensuring they are optimised for decentralised deployment and include use cases with autonomous agent workflows.
  • Architect AI systems capable of running on decentralised networks, ensuring they balance speed, scalability, and resource usage.
  • Design data pipelines and governance strategies for securely handling large-scale, decentralised datasets.
  • Implement and refine strategies for swarm intelligence-based task distribution and resource allocation across nodes. Identify and incorporate trends in decentralised AI, such as federated learning and swarm intelligence, relevant to various industry applications.
  • Lead cross-functional teams in delivering full-precision computing and building a secure, robust decentralised network.
  • Represent the organisation’s technical direction, serving as the face of the company at industry events and client meetings.
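One simple reading of the "layer-wise model sharding" mentioned above is assigning contiguous layer ranges to nodes in proportion to their capacity. A toy sketch under that assumption (node capacities and layer counts are made up; real schedulers also account for memory, bandwidth, and failures):

```python
def shard_layers(num_layers, capacities):
    """Assign contiguous layer ranges to nodes proportionally to capacity.

    Returns a list of (start, end) half-open ranges, one per node,
    together covering all layers exactly once.
    """
    total = sum(capacities)
    shards, start = [], 0
    for i, cap in enumerate(capacities):
        if i == len(capacities) - 1:
            end = num_layers  # last node takes the remainder
        else:
            end = start + round(num_layers * cap / total)
        shards.append((start, end))
        start = end
    return shards

# 24 transformer layers over three heterogeneous nodes
print(shard_layers(24, [4, 2, 2]))  # [(0, 12), (12, 18), (18, 24)]
```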

Requirements

  • Bachelor’s/Master’s/Ph.D. in Computer Science, AI, or related field.
  • 12+ years of experience in AI/ML, with a track record of building distributed systems and AI solutions at scale.
  • Strong proficiency in Python, Golang, and machine learning frameworks (e.g., TensorFlow, PyTorch).
  • Expertise in decentralised architecture, P2P networking, and heterogeneous computing environments.
  • Excellent leadership skills, with experience in cross-functional team management and strategic decision-making.
  • Strong communication skills, adept at presenting complex technical solutions to diverse audiences.

About Us

CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users. It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.

Building on this foundation, we are now developing a groundbreaking decentralised protocol that will transform the AI landscape.

Website:

Benefits

  • Competitive Salary
  • An opportunity to be part of the Core team in a fast-growing company
  • A fulfilling, challenging and flexible work experience
  • Practically unlimited professional and career growth opportunities


Koolioai
Swarna M
Posted by Swarna M
Chennai
0 - 1 yrs
₹15000 - ₹20000 / mo
Python
Flask

About koolio.ai


Website: www.koolio.ai


Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.


About the Internship Position

We are looking for a motivated Backend Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.


Key Responsibilities:

  • Assist in the development and maintenance of backend systems and APIs.
  • Write reusable, testable, and efficient code to support scalable web applications.
  • Work with cloud services and server-side technologies to manage data and optimize performance.
  • Troubleshoot and debug existing backend systems, ensuring reliability and performance.
  • Collaborate with cross-functional teams to integrate frontend features with backend logic.


Requirements and Skills:

  • Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
  • Technical Skills:
  • Good understanding of server-side technologies like Python
  • Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
  • Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
  • Knowledge of version control systems such as Git.
  • Soft Skills:
  • Eagerness to learn and adapt in a fast-paced environment.
  • Strong problem-solving and critical-thinking skills.
  • Effective communication and teamwork capabilities.
  • Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.


Why Join Us?

  • Gain real-world experience working on a cutting-edge platform.
  • Work alongside a talented and passionate team committed to innovation.
  • Receive mentorship and guidance from industry experts.
  • Opportunity to transition to a full-time role based on performance and company needs.


This internship is an excellent opportunity to kickstart your career in backend development, build critical skills, and contribute to a product that has a real-world impact.

Koolioai
Swarna M
Posted by Swarna M
Remote, Chennai
5 - 7 yrs
₹20L - ₹30L / yr
Python
React.js
Flask
Google Cloud Platform (GCP)

About koolio.ai

Website: www.koolio.ai

koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.

About the Full-Time Position

We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.

Key Responsibilities:

  • Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
  • Design and build efficient, secure, and modular client-side and server-side architecture
  • Develop high-performance web applications with reusable and maintainable code
  • Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
  • Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
  • Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
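An HTTP Cloud Function in Python is ultimately just a callable that receives a request object. A hedged sketch of the kind of handler meant above (the function name and payload shape are illustrative, and `request` is only assumed to expose a Flask-style `get_json()`):

```python
import json

def create_clip(request):
    """Illustrative HTTP handler: validate a JSON payload describing an audio clip.

    In Google Cloud Functions, `request` would be a Flask Request;
    here we rely only on it exposing get_json().
    """
    data = request.get_json() or {}
    title = data.get("title")
    if not title:
        return json.dumps({"error": "title is required"}), 400
    return json.dumps({"title": title, "status": "queued"}), 200

# Local smoke test with a stub request object
class StubRequest:
    def __init__(self, payload):
        self._payload = payload
    def get_json(self):
        return self._payload

body, code = create_clip(StubRequest({"title": "Episode 1"}))
print(code, body)  # 200 {"title": "Episode 1", "status": "queued"}
```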

Requirements and Skills:

  • Education: Degree in Computer Science or a related field
  • Work Experience: Minimum of 6 years of proven experience as a Full Stack Developer or in a similar role, with demonstrable expertise in building web applications at scale
  • Technical Skills:
  • Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
  • Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
  • Familiarity with NoSQL and PostgreSQL databases
  • Experience working with audio/video processing libraries is a strong plus
  • Soft Skills:
  • Strong problem-solving skills and the ability to think critically about issues and solutions
  • Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
  • Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
  • Keen attention to detail and a passion for delivering high-quality, scalable solutions
  • Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment

Compensation and Benefits:

  • Total Yearly Compensation: ₹25 LPA based on skills and experience
  • Health Insurance: Comprehensive health coverage provided by the company
  • ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team

Why Join Us?

  • Be a part of a passionate and visionary team at the forefront of audio content creation
  • Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
  • Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
  • Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
  • Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact


Saptang Labs

Kamaleshm B
Posted by Kamaleshm B
Chennai
1 - 2 yrs
₹4L - ₹7L / yr
Engineering Management
Java
NodeJS (Node.js)
Python
Android Development

Responsibilities:

• Analyze and understand business requirements and translate them into efficient, scalable business logic.

• Develop, test, and maintain software that meets new requirements and integrates well with existing systems.

• Troubleshoot and debug software issues and provide solutions.

• Collaborate with cross-functional teams to deliver high-quality products, including product managers, designers, and developers.

• Write clean, maintainable, and efficient code.

• Participate in code reviews and provide constructive feedback to peers.

• Communicate effectively with team members and stakeholders to understand requirements and provide updates.


Required Skills:

• Strong problem-solving skills with the ability to analyze complex issues and provide solutions.

• Ability to quickly understand new problem statements and translate them into functional business logic.

• Proficiency in at least one programming language such as Java, Node.js, or C/C++.

• Strong understanding of software development life cycle (SDLC).

• Excellent communication skills, both verbal and written.

• Team player with the ability to collaborate effectively with different teams.


Preferred Qualifications:

• Experience with Java, Golang, or Rust is a plus.

• Familiarity with cloud platforms, microservices architecture, and API development.

• Prior experience working in an agile environment.

• Strong debugging and optimization skills.


Educational Qualifications:

• Bachelor's degree in Computer Science, Engineering, related field, or equivalent work experience.

Smartan.ai

Aadharsh M
Posted by Aadharsh M
Chennai
4 - 8 yrs
₹5L - ₹15L / yr
Python
NumPy
TensorFlow
PyTorch
Google Cloud Platform (GCP)

Role Overview:

We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.


Key Responsibilities:

  • Develop, implement, and optimize machine learning models and algorithms to support product development.
  • Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
  • Collaborate with cross-functional teams to define data requirements and product taxonomy.
  • Design and build scalable data pipelines and systems to support real-time data processing and analysis.
  • Ensure the accuracy and quality of data used for modeling and analytics.
  • Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
  • Implement best practices for data governance, privacy, and security.
  • Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
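The end-to-end lifecycle above — collect, preprocess, train, evaluate — can be miniaturised in a few lines. A dependency-free sketch, using synthetic data and closed-form 1-D linear regression as a stand-in for a real model:

```python
import random

def train_test_split(xs, ys, test_frac=0.25, seed=0):
    """Shuffle and split paired data into train and test sets."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([xs[i] for i in train], [ys[i] for i in train],
            [xs[i] for i in test], [ys[i] for i in test])

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mse(ys, preds):
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

# Synthetic data: y = 2x + 1 with a little deterministic noise
xs = [float(i) for i in range(40)]
ys = [2 * x + 1 + random.Random(i).uniform(-0.1, 0.1) for i, x in enumerate(xs)]
xtr, ytr, xte, yte = train_test_split(xs, ys)
a, b = fit_line(xtr, ytr)
err = mse(yte, [a * x + b for x in xte])
print(round(a, 2), round(b, 2), err < 0.05)  # approximately: 2.0 1.0 True
```

In a production setting the split, model, and metric would come from libraries such as scikit-learn, and deployment/monitoring would wrap this loop, but the lifecycle stages are the same.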


Qualifications:

  • Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
  • 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
  • Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
  • Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
  • Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
  • Hands-on experience with data visualization tools and techniques.
  • Strong understanding of statistics, data analysis, and machine learning concepts.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a fast-paced, dynamic environment.


Preferred Qualifications:

  • Knowledge of microservices architecture and RESTful APIs.
  • Familiarity with Agile development methodologies.
  • Experience in building taxonomy for data products.
  • Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Optimum


Agency job
via Pluginlive by Harsha Saggi
Chennai, Bengaluru (Bangalore)
3 - 14 yrs
₹15L - ₹26L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Python
SQL

Company: Optimum Solutions

About the company: Optimum Solutions is a leader in the sheet metal industry, providing sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. From tools to software to machines, we are a one-stop shop for all your technology needs.

Role Overview:

  • Creating and managing database schemas that represent and support business processes; hands-on experience with SQL queries and database server deployment management.
  • Implementing automated testing platforms, unit tests, and CI/CD pipelines
  • Proficient understanding of code versioning tools such as GitHub, Bitbucket, and ADO
  • Understanding of container platforms such as Docker

Job Description

  • We are looking for a skilled Python developer with knowledge of machine learning and deep learning frameworks.
  • Your primary focus will be working with the Product and Use-case Delivery team on prompting for different Gen-AI use cases.
  • You will be responsible for prompting and building use-case pipelines.
  • You will evaluate all Gen-AI features and use-case pipelines.

Position: AI ML Engineer

Location: Chennai (Preference) and Bangalore

Minimum Qualification:  Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.

Experience:  4-6 years

CTC: 16.5 - 17 LPA

Employment Type:  Full Time

Key Responsibilities:

  • Own the entire prompt life cycle: prompt design, prompt template creation, and prompt tuning/optimization for various Gen-AI base models
  • Design and develop prompts suited to project needs
  • Lead and manage a team of prompt engineers
  • Stakeholder management across business and domain teams as required for the projects
  • Evaluate base models and benchmark performance
  • Implement prompt guardrails to prevent attacks such as prompt injection, jailbreaking, and prompt leaking
  • Develop, deploy, and maintain auto-prompt solutions
  • Design and implement minimum design standards for every use case involving prompt engineering
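A hedged illustration of the template-plus-guardrail idea from the responsibilities above (the deny-list is a toy; production guardrails combine classifiers, policies, and model-side defenses):

```python
import string

PROMPT_TEMPLATE = string.Template(
    "You are a support assistant for $product.\n"
    "Answer the user's question using only the context below.\n"
    "Context: $context\n"
    "Question: $question"
)

# Toy deny-list guardrail against obvious injection attempts
DENY_PHRASES = ("ignore previous instructions", "reveal your system prompt")

def build_prompt(product, context, question):
    """Render the template, rejecting inputs that trip the deny-list."""
    lowered = question.lower()
    if any(phrase in lowered for phrase in DENY_PHRASES):
        raise ValueError("possible prompt injection detected")
    return PROMPT_TEMPLATE.substitute(product=product, context=context, question=question)

print(build_prompt("Acme CRM", "Plans start at 10 USD/mo.", "How much is the basic plan?"))
```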

Skills and Qualifications

  • Strong proficiency with Python, the Django framework, and regular expressions (regex)
  • Good understanding of the machine learning frameworks PyTorch and TensorFlow
  • Knowledge of Generative AI and RAG pipelines
  • Good grasp of microservice design patterns and of developing scalable applications.
  • Ability to build and consume REST APIs
  • Fine-tune and optimize code for better performance.
  • Strong understanding of OOP and design thinking
  • Understanding of asynchronous programming and its quirks and workarounds
  • Good understanding of server-side templating languages
  • Understanding of accessibility and security compliance, and of user authentication and authorization between multiple systems, servers, and environments
  • Integration of APIs, multiple data sources, and databases into one system
  • Good knowledge of API gateways and proxies, such as WSO2, Kong, nginx, and Apache HTTP Server.
  • Understanding of the fundamental design principles behind scalable and distributed applications
  • Good working knowledge of microservices architecture: behaviour, dependencies, scalability, etc.
  • Experience deploying on cloud platforms such as Azure or AWS
  • Familiarity and working experience with DevOps tools such as Azure DevOps, Ansible, Jenkins, and Terraform
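The RAG pipeline mentioned in the skills list boils down to retrieve-then-prompt. A toy sketch using bag-of-words cosine similarity in place of a real embedding store (documents and names are illustrative):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    return [d for score, d in sorted(scored, key=lambda s: -s[0])[:k]]

docs = [
    "Invoices are emailed on the first of each month.",
    "Password resets are handled from the account settings page.",
]
context = retrieve("how do I reset my password", docs, k=1)
prompt = f"Answer using this context: {context[0]}\nQuestion: how do I reset my password"
print(context[0])
```

A production pipeline would swap the word-overlap scorer for dense embeddings and a vector database, but the retrieve-augment-generate flow is the same.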