50+ Python Jobs in Chennai



About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians, including PhDs, from top engineering and research institutes such as IITs, IISc, CERN, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As an ML/AI Engineer, you will be responsible for designing and developing intelligent software to solve business problems. You will collaborate with data scientists and domain experts to incorporate ML and AI technologies into existing or new workflows. You’ll analyze new opportunities and ideas, train and evaluate ML models, conduct experiments, and develop PoCs and prototypes.
Responsibilities
- Design, train, improve, and launch machine learning models using tools such as XGBoost, TensorFlow, and PyTorch (a minimal sketch follows this list).
- Own the end-to-end ML lifecycle and MLOps, including model deployment, performance tuning, and ongoing evaluation and maintenance.
- Improve the way we evaluate and monitor model and system performance.
- Propose and implement ideas that directly impact our operational and strategic metrics.
- Create tools and frameworks that accelerate the delivery of ML/AI products.
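For flavor, here is a minimal sketch of the train-and-evaluate loop referenced above, using XGBoost with scikit-learn utilities. The synthetic dataset and hyperparameters are illustrative assumptions, not Moative’s actual stack.

```python
# Minimal sketch: train and evaluate an XGBoost classifier on synthetic data.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Evaluate on the held-out split before any launch decision.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```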
Who you are
You are an engineer who is passionate about using AI/ML to improve processes and products and to delight customers. You have experience working with less-than-clean data, developing ML models, and deploying them to production. You thrive on taking initiative, are comfortable with ambiguity, and can passionately defend your decisions.
Requirements and skills
- 4+ years of experience in programming languages such as Python, PySpark, or Scala.
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP); containerization and DevOps tooling (Docker, Kubernetes); and MLOps practices and platforms like MLflow.
- Strong understanding of ML algorithms and frameworks (e.g., TensorFlow, PyTorch).
- Experience with AI foundation models and associated architectural and solution-development frameworks.
- Broad understanding of data structures, data engineering, statistical methodologies and machine learning models.
- Strong communication skills and teamwork.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians, including PhDs, from top engineering and research institutes such as IITs, IISc, CERN, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Scientist at Moative, you’ll play a crucial role in extracting valuable insights from data to drive informed decision-making. You’ll work closely with cross-functional teams to build predictive models and develop solutions to complex business problems. You will also be involved in conducting experiments, building POCs and prototypes.
Responsibilities
- Support end-to-end development and deployment of ML/AI models, from data preparation, analysis, and feature engineering to model development, validation, and deployment (a minimal sketch follows this list)
- Gather, prepare and analyze data, write code to develop and validate models, and continuously monitor and update them as needed.
- Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
- Document methodologies and results, present findings and communicate insights to non-technical audiences
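As a sketch of the prepare-validate loop described above, the snippet below engineers a simple feature with pandas and cross-validates a scikit-learn model; the column names and values are invented for illustration.

```python
# Minimal sketch: pandas feature preparation plus cross-validated model checks.
# Column names and values are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({
    "usage_kwh":     [120, 340, 90, 410, 220, 310, 150, 280, 200, 95, 360, 130],
    "tenure_months": [3, 48, 7, 60, 24, 36, 12, 30, 18, 5, 54, 9],
    "churned":       [1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1],
})

# Simple feature engineering: normalize usage by account age.
df["kwh_per_month"] = df["usage_kwh"] / df["tenure_months"]

X = df[["usage_kwh", "tenure_months", "kwh_per_month"]]
y = df["churned"]

# 3-fold cross-validation as a first validation pass before deployment.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=3)
print(f"mean CV accuracy: {scores.mean():.2f}")
```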
Skills & Requirements
- Proficiency in Python and familiarity with core Python libraries for data analysis and ML (such as NumPy, Pandas, scikit-learn, NLTK).
- Strong understanding of and experience with data analysis, statistical and mathematical concepts, and ML algorithms.
- Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
- Broad understanding of data structures and data engineering.
- Strong communication skills
- Strong collaboration skills, continuous learning attitude and a problem solving mind-set
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to be present in the city. We intend to move to a hybrid model in a few months’ time.


About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians, including PhDs, from top engineering and research institutes such as IITs, IISc, CERN, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS, Azure, or GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infrastructure and data-related services on AWS (EC2, EMR, RDS, Redshift) and/or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools such as Luigi or Airflow (a minimal Airflow sketch follows this list).
- Experience with common relational SQL, NoSQL and Graph databases.
- Strong experience with scripting languages: Python, PySpark, Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc.
- Experience with big data tools (Spark, Kafka, etc.) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
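To make the pipeline-tooling requirement concrete, here is a minimal Airflow 2.x sketch of a daily extract-transform-load DAG; the task bodies, sources, and targets are placeholders, not a real Moative pipeline.

```python
# Minimal sketch of a daily ELT DAG in Airflow 2.x; task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g., land raw files from S3 or an API into a staging area

def transform():
    ...  # e.g., clean and join staged data into analytics-ready tables

def load():
    ...  # e.g., upsert into the warehouse (Redshift, Snowflake, ...)

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```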
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians, including PhDs, from top engineering and research institutes such as IITs, IISc, CERN, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Full Stack AI Developer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of users by transforming how services are delivered and experienced. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Design and develop AI-powered applications, including intuitive user interfaces on the front end and robust back ends for web applications.
- Develop full-stack features in cloud-native environments, with proficiency in HTML, CSS, and JavaScript (popular frameworks such as React or Angular) as well as server-side languages such as Node.js and Python.
- Utilize and adapt foundation models and multi-modal LLMs (voice, image, text) as the core building blocks for developing impactful products aimed at improving service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance (a minimal back-end sketch follows this list).
- Architect, build, and deploy intelligent agentic AI workflows that automate and optimize key processes. You will be involved in the full lifecycle from conceptualization and design to implementation and monitoring.
- Contribute directly to enhancing our model evaluation, monitoring and fine-tuning methodologies to ensure robust and reliable system performance.
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions.
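The sketch below shows the shape of a back-end endpoint that wraps an LLM call behind a small interface; call_llm is a hypothetical placeholder for whichever provider SDK is actually used, and Flask is just one of the Python options this role mentions.

```python
# Minimal sketch: a Flask endpoint wrapping an LLM call.
# call_llm is a hypothetical placeholder for a real provider SDK.
from flask import Flask, jsonify, request

app = Flask(__name__)

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real client call here.
    return f"echo: {prompt}"

@app.post("/api/assist")
def assist():
    prompt = request.get_json(force=True).get("prompt", "")
    # Prompt templating, guardrails, and evaluation hooks would live here.
    return jsonify({"answer": call_llm(prompt)})

if __name__ == "__main__":
    app.run(port=8000)
```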
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying real-world applications, including ensuring integration between user interfaces and server-side logic. You are excellent at writing clean, efficient code for both the front end and the back end, and have a strong ability to conduct regular testing and debugging to ensure optimal performance and a smooth user experience. You are excited at the possibility of embedding AI technologies to develop intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of experience in full-stack development.
- Proficiency in HTML, CSS, JavaScript (popular frameworks such as React or Angular).
- Strong knowledge of server-side languages such as Node.js and Python.
- Experience with databases like MySQL, PostgreSQL, or MongoDB.
- Knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization, and DevOps tooling (Docker, Kubernetes).
- Familiarity with version control systems, especially Git.
- 1+ year of experience with AI models and tools, including deploying multi-modal LLMs (text, images, voice) as part of business applications.
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.



About Moative
Moative, an Applied AI Services company, designs AI roadmaps and builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance.
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle, from conceptualization and design to implementation and monitoring.
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency (a minimal evaluation sketch follows this list).
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions.
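As a sketch of the evaluation work above, the snippet below computes field-level exact-match accuracy for a document-extraction model; the records are invented for illustration.

```python
# Minimal sketch: field-level evaluation of an extraction model.
# Records are invented for illustration.
predictions = [
    {"applicant_id": "A1", "dob": "1990-01-12", "pincode": "600001"},
    {"applicant_id": "A2", "dob": "1985-07-03", "pincode": "600042"},
]
ground_truth = [
    {"applicant_id": "A1", "dob": "1990-01-12", "pincode": "600001"},
    {"applicant_id": "A2", "dob": "1985-07-30", "pincode": "600042"},
]

def field_accuracy(preds, truth):
    """Exact-match accuracy per extracted field across documents."""
    return {
        field: sum(p[field] == t[field] for p, t in zip(preds, truth)) / len(truth)
        for field in truth[0]
    }

print(field_accuracy(predictions, ground_truth))
# -> {'applicant_id': 1.0, 'dob': 0.5, 'pincode': 1.0}
```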
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization, and DevOps tooling (Docker, Kubernetes)
- Experience tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

We are looking for a skilled Automation Anywhere Engineer with a strong background in RPA development, Python scripting, and CoPilot integrations. The ideal candidate will play a key role in designing, developing, and implementing automation solutions that streamline business processes and improve operational efficiency.
Required Skills:
- 2–6 years of hands-on experience in Automation Anywhere (A2019 or higher).
- Strong programming skills in Python for automation and integration.
- Good understanding of RPA concepts, lifecycle, and best practices.
- Experience working with CoPilot (Microsoft Power Platform/AI CoPilot or equivalent).
- Knowledge of API integration and web services (REST/SOAP); a minimal integration sketch follows this list.
- Familiarity with process analysis and design techniques.
- Ability to write clean, reusable, and well-documented code.
- Strong problem-solving and communication skills.
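As a sketch of the REST-integration requirement above, here is a minimal Python call with retries using requests; the endpoint URL and payload fields are placeholders.

```python
# Minimal sketch: calling a REST API from a Python automation step, with retries.
# The URL and field names are placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1)))

resp = session.get("https://api.example.com/v1/invoices", timeout=10)
resp.raise_for_status()

for invoice in resp.json():
    # Hand each record to the downstream bot or queue step.
    print(invoice.get("id"))
```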

· Design, develop, and implement AI/ML models and algorithms.
· Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
· Write clean, efficient, and well-documented code.
· Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
· Work closely with senior team members to understand project requirements and contribute to technical solutions.
· Troubleshoot and debug AI/ML models and applications.
· Stay up-to-date with the latest advancements in AI/ML.
· Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models (a minimal sketch follows this list).
· Develop and deploy AI solutions on Google Cloud Platform (GCP).
· Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
· Utilize Vertex AI for model training, deployment, and management.
· Integrate and leverage Google Gemini for specific AI functionalities.
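For flavor, the snippet below is a POC-scale TensorFlow/Keras model on synthetic NumPy data, the kind of artifact that would then be packaged for Vertex AI training or serving; shapes, labels, and epochs are illustrative assumptions.

```python
# Minimal sketch: a POC-scale Keras model on synthetic data.
# Shapes, labels, and epochs are illustrative assumptions.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 10).astype("float32")
y = (X.sum(axis=1) > 5).astype("float32")  # synthetic binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {acc:.2f}")
```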
Qualifications:
· Bachelor’s degree in Computer Science, Artificial Intelligence, or a related field.
· 3+ years of experience in developing and implementing AI/ML models.
· Strong programming skills in Python.
· Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
· Good understanding of machine learning concepts and techniques.
· Ability to work independently and as part of a team.
· Strong problem-solving skills.
· Good communication skills.
· Experience with Google Cloud Platform (GCP) is preferred.
· Familiarity with Vertex AI is a plus.

- Work with a team to provide end-to-end solutions, including coding, unit testing, and defect fixes.
- Build scalable solutions, and work with quality assurance and control teams to analyze and fix issues.
- Develop and maintain APIs and services in Node.js/Python.
- Develop and maintain web-based UIs using front-end frameworks.
- Participate in code reviews, unit testing, and integration testing (a minimal pytest sketch follows this list).
- Participate in the full software development lifecycle, from concept and design to implementation and support
- Ensure application performance, scalability, and security through best practices in coding, testing and deployment
- Collaborate with DevOps team for troubleshooting deployment issues
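As a sketch of the unit-testing expectation above, here is a small service function with pytest tests; the function and values are invented for illustration.

```python
# Minimal sketch: a pure service function and its pytest unit tests.
# Function name and values are invented for illustration.
import pytest

def apply_discount(total: float, percent: float) -> float:
    """Return total after a percentage discount; reject bad input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```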
Qualification
● 1-5 years of experience as a Software Engineer or similar, focusing on software development and system integration
● Proficiency in Node.js, TypeScript, React, and the Express framework
● In-depth knowledge of databases such as MongoDB
● Proficient in HTML5, CSS3, and responsive UI design
● Proficiency in any Python development framework is a plus
● Strong direct experience in functional and object-oriented programming in JavaScript
● Experience with cloud platforms (Azure preferred)
● Microservices architecture and containerization
● Expertise in performance monitoring, tuning, and optimization
● Understanding of DevOps practices for automated deployments
● Understanding of software design patterns and best practices
● Practical experience working in Agile developments (scrum)
● Excellent critical thinking skills and the ability to mentor junior team members
● Effectively communicate and collaborate with cross-functional teams
● Strong capability to work independently and deliver results within tight deadlines
● Strong problem-solving abilities and attention to detail


- 3 to 5 years of full-stack development experience implementing applications using Python and React.js
- In-depth knowledge of Python for data analytics, NLP, and Flask APIs
- Experience working with SQL databases (MySQL/Postgres; min. 2 years)
- Ability to use Gen AI tools for productivity
- Gen AI for natural language processing use cases, using ChatGPT-4, Gemini Flash, or other cutting-edge tools
- Hands-on exposure to messaging systems like RabbitMQ (a minimal sketch follows this list)
- Experience with end-to-end and unit testing frameworks (Jest/Cypress)
- Experience working with NoSQL databases like MongoDB
- Understanding of the differences between delivery platforms, such as mobile vs. desktop, and optimizing output for the specific platform
- Architectural knowledge of the Azure cloud
- Proficient understanding of code versioning tools, such as Git and SVN
- Knowledge of CI/CD (Jenkins/Hudson)
- Self-organizing, with experience working in an Agile/Scrum culture
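As a sketch of the messaging requirement above, here is a minimal pika publisher; the host, queue, and payload are placeholders.

```python
# Minimal sketch: publishing a task to RabbitMQ with pika.
# Host, queue, and payload are placeholders.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="nlp_jobs", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="nlp_jobs",
    body=json.dumps({"doc_id": 42, "task": "summarize"}).encode("utf-8"),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```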
Good to have:
- Experience working with Angular, Elasticsearch, and Redis
- Understanding of accessibility and security compliance
- Understanding of UI/UX


Proficient in Golang, Python, Java, C++, or Ruby (at least one)
Strong grasp of system design, data structures, and algorithms
Experience with RESTful APIs, relational and NoSQL databases
Proven ability to mentor developers and drive quality delivery
Track record of building high-performance, scalable systems
Excellent communication and problem-solving skills
Experience in consulting or contractor roles is a plus

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.
Responsibilities:
Ensure the quality of product feature development.
Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.
Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python (a minimal sketch follows this list).
Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.
Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.
Collaboration: Participate in project planning and execution with the team for efficient project delivery.
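As a sketch of the reusable test scripts mentioned above, here is a minimal Selenium-plus-pytest check; the URL and element locator are placeholders.

```python
# Minimal sketch: a reusable UI check with Selenium 4 and pytest.
# The URL and element locator are placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_login_page_has_submit_button(driver):
    driver.get("https://app.example.com/login")
    button = driver.find_element(By.CSS_SELECTOR, "button[type='submit']")
    assert button.is_displayed()
```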
Requirements:
A Bachelor’s degree in Computer Science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.
2-5 years of solid experience in software testing, with a focus on automation. Proficiency in using a defect tracking system, code repositories, and IDEs.
A good grasp of programming languages like Python, Java, or JavaScript. Must be able to understand and write code.
Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).
Good team player with a proactive approach to continuous learning.
Sound understanding of the Agile software development methodology.
Experience in a SaaS-based product company or a fast-paced startup environment is a plus.

Job Title: Site Reliability Engineer (SRE)
Experience: 4+ Years
Work Location: Bangalore / Chennai / Pune / Gurgaon
Work Mode: Hybrid or Onsite (based on project need)
Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.
🛠️ Key Responsibilities
- Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
- Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting (a minimal monitoring sketch follows this list).
- Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
- Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
- Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
- Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
- Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
- Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.
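As a sketch of the Python monitoring automation above, the snippet below pulls an EC2 CPU metric from CloudWatch with boto3 and flags a breach; the instance ID and threshold are assumptions.

```python
# Minimal sketch: flag high EC2 CPU from CloudWatch metrics with boto3.
# Instance ID and threshold are assumptions.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(minutes=15),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in stats["Datapoints"]:
    if point["Average"] > 80:
        print(f"ALERT: CPU at {point['Average']:.1f}% at {point['Timestamp']}")
```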
✅ Must-Have Skills
- 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
- Strong programming skills in Python (mandatory).
- Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
- Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
- Strong background in Linux-based systems and shell scripting.
- Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
- Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
- Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.


At BigThinkCode, our technology solves complex problems. We are looking for a highly talented engineer to join our technology team in Chennai.
Our ideal candidate will have expert knowledge of software development processes and strong programming and problem-solving skills.
This is an opportunity to join a growing team and make a substantial impact at BigThinkCode. We have a challenging workplace that welcomes innovative ideas and talent, offers growth opportunities, and provides a positive environment.
The job description is below for your reference; if you are interested, please share your profile so we can connect and discuss.
Company: BigThinkCode Technologies
URL: https://www.bigthinkcode.com/
Experience: 4 – 5 years
Location: Chennai (Hybrid)
Responsibilities:
· Work closely as part of the tech team to build new features.
· Collaborate with managers, designers, and engineers to deliver user-facing features
· Write reusable code and build libraries for later use
· Utilize knowledge of programming languages and the software ecosystem to accomplish goals.
· Design software systems and supporting infrastructure
· Contribute to the technical roadmap
Required skills:
· Familiarity with algorithms and data structures.
· Expertise in OOP concepts and their implementation.
· Hands-on, real-world experience with design patterns, testing, and debugging.
· Familiarity with the Python programming language.
· Experience in one or more Python frameworks such as Flask, Django, or FastAPI (a minimal FastAPI sketch follows this list).
· Ability to conduct code reviews to ensure code quality.
· Hands-on database experience (relational / NoSQL / ORMs).
· Nice to have: deployment and DevOps skills such as cloud platforms and Docker.
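For flavor, here is a minimal FastAPI sketch in the spirit of the frameworks listed above; the schema and in-memory store are illustrative, not BigThinkCode’s codebase.

```python
# Minimal sketch: a FastAPI service with a typed request/response model.
# The Item schema and in-memory store are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: int
    name: str

_DB: dict[int, Item] = {}  # stand-in for a relational/NoSQL store

@app.post("/items")
def create_item(item: Item) -> Item:
    _DB[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _DB:
        raise HTTPException(status_code=404, detail="item not found")
    return _DB[item_id]
```

Run it with, e.g., `uvicorn main:app --reload` (module name assumed).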
Benefits:
· Medical cover for employees and eligible dependents.
· Tax-beneficial salary structure.
· Comprehensive leave policy
· Competency development training programs.

Job Title: AI Solutioning Architect – Healthcare IT
Role Summary:
The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).
Key Responsibilities:
- Architect scalable AI solutions from data ingestion to deployment.
- Align AI initiatives with business objectives and regulatory requirements (HIPAA).
- Collaborate with cross-functional teams to deliver AI projects.
- Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
- Mentor technical teams and ensure best practices in MLOps.
- Communicate complex concepts to diverse stakeholders.
Qualifications:
- Bachelor’s/Master’s in Computer Science or related field.
- 12+ years in software development/architecture with strong AI/ML focus.
- Experience in healthcare IT and compliance (HIPAA).
- Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
- Hands-on with GCP (preferred) or other cloud platforms.
- Strong leadership, problem-solving, and communication skills.


Python Developer Job Description
A Python Developer is responsible for designing, developing, and deploying software applications using the Python programming language. Here's a brief overview:
Key Responsibilities
- Software Development: Develop high-quality software applications using Python.
- Problem-Solving: Solve complex problems using the Python programming language.
- Code Maintenance: Maintain and update existing codebases to ensure they remain efficient and scalable.
- Collaboration: Collaborate with cross-functional teams to identify and prioritize project requirements.
- Testing and Debugging: Write unit tests and debug applications to ensure high-quality code.
Technical Skills
- Python: Strong understanding of the Python programming language and its ecosystem.
- Programming Fundamentals: Knowledge of programming fundamentals, including data structures, algorithms, and object-oriented programming.
- Frameworks and Libraries: Familiarity with popular Python frameworks and libraries, such as Django, Flask, or Pandas.
- Database Management: Understanding of database management systems, including relational databases and NoSQL databases.
- Version Control: Knowledge of version control systems, including Git.


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles (a classroom-style sketch follows this list).
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
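For flavor, here is a classroom-style sketch of the fundamentals this role teaches, annotated the way an instructor might walk students through it.

```python
# Classroom-style sketch: binary search on a sorted list, annotated for students.
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # midpoint of the current search window
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1

assert binary_search([2, 5, 8, 12, 16], 12) == 3
assert binary_search([2, 5, 8, 12, 16], 7) == -1
```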
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.

Job Summary:
We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems.
Key Responsibilities:
- Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn.
- Perform data preprocessing, feature engineering, and exploratory data analysis.
- Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI (a minimal pipeline sketch follows this list).
- Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions.
- Optimize model performance and ensure robustness in real-time environments.
- Maintain clear documentation of code, models, and processes.
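As a sketch of the model-to-API path above, the snippet below trains a scikit-learn Pipeline on synthetic data and persists it for serving; the data and file name are illustrative.

```python
# Minimal sketch: preprocessing + model in one scikit-learn Pipeline,
# persisted for serving behind a Flask/FastAPI endpoint. Data is synthetic.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print(f"test accuracy: {pipe.score(X_test, y_test):.2f}")

# The API layer loads this artifact and calls pipe.predict() per request.
joblib.dump(pipe, "model.joblib")
```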
Required Skills:
- Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Strong understanding of ML algorithms (classification, regression, clustering, deep learning).
- Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP).
- Familiarity with containerization (Docker, Kubernetes) and CI/CD practices.
- Solid grasp of RESTful API development and integration.
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or related field.
- 2–5 years of experience in Python development with a focus on AI/ML.
- Exposure to MLOps practices and model monitoring tools.

5+ years of IT development experience, with a minimum of 3+ years of hands-on experience in Snowflake.
- Strong experience building/designing data warehouses, data lakes, and data marts end to end, focusing on large enterprise-scale Snowflake implementations on any of the hyperscalers.
- Strong experience building productionized data ingestion and data pipelines in Snowflake (a minimal connector sketch follows this list).
- Good knowledge of Snowflake’s architecture and features like Zero-Copy Cloning, Time Travel, and performance tuning capabilities.
- Good experience with Snowflake RBAC and data security.
- Strong experience with Snowflake features, including new Snowflake features.
- Good experience in Python/PySpark.
- Experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF).
- Experience/knowledge of orchestration and scheduling tools like Airflow.
- Good understanding of ETL or ELT processes and ETL tools.
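As a sketch of the ingestion work above, here is a minimal load using the Snowflake Python connector; the credentials, stage, and table names are placeholders.

```python
# Minimal sketch: COPY staged files into a Snowflake table via the Python connector.
# Credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # COPY INTO pulls files already staged (e.g., from S3) into a table.
    cur.execute("COPY INTO orders FROM @orders_stage FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.execute("SELECT COUNT(*) FROM orders")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```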

Responsibilities:
- Develop, maintain, and manage advanced reporting, analytics, and dashboards in Tableau or Power BI
- Ability to identify the right data visualization based on business requirements
- Perform and document data analysis, data validation, and data-mapping design
- Review and improve existing reports, dashboards, and analytics systems
- Help optimize Tableau report/dashboard performance on the server
- Develop presentations and documents that will have an impact
- Communicate complex topics to the team through both written and oral communication
- Ensure that project deliverables meet business requirements and that projects are completed within assigned timelines

About Moative
Moative, an Applied AI Services company, designs AI roadmaps and builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure, spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning (a minimal MLflow sketch follows this list).
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
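As a sketch of the tracking-and-versioning practice above, here is a minimal MLflow run; the experiment name, parameters, and metric values are illustrative.

```python
# Minimal sketch: logging a run with MLflow.
# Experiment name, parameters, and metric values are illustrative.
import mlflow

mlflow.set_experiment("demand-forecast")

with mlflow.start_run():
    mlflow.log_param("model", "xgboost")
    mlflow.log_param("max_depth", 4)
    # ... training happens here ...
    mlflow.log_metric("val_rmse", 12.7)
    # Artifacts (model binaries, plots) can also be logged for deployment:
    # mlflow.log_artifact("model.joblib")
```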
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, because rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hire someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


Genspark is hiring professionals for C development for their premium client.
Work Location: Chennai
Entry Criteria
Graduate from any engineering background, or BSc/MSc/MCA with a specialization in Computer Science/Electronics/IT
Minimum 1 year of industry experience
Working knowledge of C/Embedded C/C++/DSA
Programming Aptitude (Any Language)
Basic understanding of programming constructs: variables, loops, conditionals, functions
Logical thinking and algorithmic approach
Computer Science Fundamentals:
Data structures basics: arrays, stacks, queues, linked lists
Operating System basics: what is a process/thread, memory, file system, etc.
Basic understanding of compilation, runtime, networking, sockets, etc.
Problem Solving & Logical Reasoning
Ability to trace logic, find errors, and reason through pseudocode
Analytical and debugging capabilities
Learning Attitude & Communication
Demonstrated interest in low-level or systems programming (even if no experience)
Willingness to learn C and work close to the OS level
Clarity of thought and ability to explain what they do know
Soft Skills:
Able to explain and communicate their thoughts clearly in English
Confident in solving new problems independently or with guidance
Willingness to take feedback and iterate
Evaluation Process
Candidates will be assigned an online test, followed by a technical screening.
Shortlisted candidates will then appear for a face-to-face (F2F) interview with the client in Chennai.


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.


Job Title : Senior Machine Learning Engineer
Experience : 8+ Years
Location : Chennai
Notice Period : Immediate Joiners Only
Work Mode : Hybrid
Job Summary :
We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.
The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.
Mandatory Skills :
- Programming Languages : Python
- Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
- ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering
- Operating Systems : RHEL or any Unix-based OS
- Databases : Oracle or any relational database
- Version Control : Git
- Development Methodologies : Agile
Desired Skills :
- Experience with issue tracking tools such as Azure DevOps or JIRA.
- Understanding of data science concepts.
- Familiarity with Big Data algorithms, models, and libraries.
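To make the mandatory clustering skills concrete, here is a minimal, illustrative sketch contrasting K-Means and DBSCAN in scikit-learn; the data is synthetic and every parameter is a placeholder, not a recommendation:

```python
# Minimal sketch: K-Means vs. DBSCAN on synthetic data (illustrative only).
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

# K-Means: requires the number of clusters up front.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# DBSCAN: density-based, discovers the cluster count itself; -1 marks noise.
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

print("K-Means clusters:", np.unique(kmeans_labels))
print("DBSCAN clusters (incl. noise):", np.unique(dbscan_labels))
```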

Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills
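As a rough illustration of the Glue/PySpark ETL work described above, here is one possible shape of a Glue job script; the bucket names and columns are placeholders, and the awsglue modules are only available inside the Glue runtime:

```python
# Sketch of a PySpark ETL step as it might appear in an AWS Glue job script.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract raw CSV from S3, clean and aggregate, load curated Parquet back.
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
)
daily = clean.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))
daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_orders/")

job.commit()
```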


- Design and implement cloud solutions, build MLOps on Azure
- Build CI/CD pipelines orchestration by GitLab CI, GitHub Actions, Circle CI, Airflow or similar tools
- Data science model review, run the code refactoring and optimization, containerization, deployment, versioning, and monitoring of its quality
- Data science models testing, validation and tests automation
- Deployment of code and pipelines across environments
- Model performance metrics
- Service performance metrics
- Communicate with a team of data scientists, data engineers and architect, document the processes
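For the model review, versioning, and metrics duties above, a minimal MLflow tracking sketch is one common building block; the experiment name, model, and metric here are illustrative toys:

```python
# Minimal MLflow tracking sketch: log params, a quality metric, and the model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("model-review-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("r2", r2_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for deployment
```
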
Job Title: QA Intern
Location: Chennai (Work From Office – Let’s ensure quality together!)
Duration: 6 months (High-performing interns may be offered a full-time role)
Stipend: INR 10,000/- per month
About the Company:
F22 Labs GLOBAL is a startup software studio based out of Chennai. We are the rocket fuel for other startups across the world, powering them with extremely high-quality software. We help entrepreneurs build their vision into beautiful software products (web/mobile). If you're into creating beautiful software and solving real problems, you’ll fit right in with us. Let’s make cool things happen!
Position Overview:
Are you detail-oriented, curious, and excited about breaking things (in a good way)? As a QA Intern at F22 Labs, you’ll learn the ropes of software testing and contribute to ensuring that our web and mobile applications meet the highest standards. You'll work alongside experienced QA professionals, developers, and designers to help us release top-quality products. If you’re looking to launch your career in quality assurance and make a real impact — we want you on board!
Key Responsibilities:
- Test Case Design: Learn to write clear and concise test cases for web and mobile applications (we’ll guide you every step of the way!).
- Manual Testing: Execute manual functional, regression, and exploratory tests to uncover bugs and ensure everything works smoothly.
- Bug Reporting: Log bugs using tools like JIRA or ClickUp and collaborate with developers to verify fixes.
- Requirement Understanding: Participate in team discussions to understand product features and translate them into test scenarios.
- API Testing Exposure: Assist in testing APIs using tools like Postman (we’ll teach you how to validate backend responses).
- Cross-Browser & Device Testing: Help ensure our applications work seamlessly across browsers and mobile devices.
- Test Documentation: Maintain basic test reports and update defect logs to keep things organized.
- Team Collaboration: Work closely with QA mentors, developers, and designers in a fun, fast-paced environment.
Skills Required:
- Basic knowledge of manual testing concepts (functional, regression, exploratory).
- Familiarity with bug tracking tools (JIRA, ClickUp, or similar).
- Understanding of SDLC and STLC.
- Understanding of web and mobile application workflows.
- Awareness of cross-browser/device testing.
- Strong attention to detail and willingness to learn quickly.
- Strong communication and analytical thinking skills.
- Good to have:
- Basic knowledge of test automation tools such as Selenium.
- Exposure to any programming language such as Java or Python for writing simple automation scripts.
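For a taste of the "good to have" automation skills above, here is a beginner-level Selenium sketch in Python: open a page, assert on its title, and read a heading. The URL is a placeholder, and a local Chrome/chromedriver setup is assumed:

```python
# Beginner Selenium sketch: open a page and make a simple assertion.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome is installed locally
try:
    driver.get("https://example.com")
    assert "Example" in driver.title, f"unexpected title: {driver.title}"
    heading = driver.find_element(By.TAG_NAME, "h1")
    print("Page heading:", heading.text)
finally:
    driver.quit()
```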
Why Join Us (Perks & Benefits):
- Mentorship and hands-on training (learn from experienced QA professionals).
- Opportunity to work on real projects from day one.
- Supportive and collaborative team environment.
- Exposure to fast-paced startup culture.
- Possibility of full-time placement post internship based on performance.
- Paid internship.
If you're ready to kickstart your QA career in a dynamic, high-growth environment — apply today and help us build software that works beautifully!


- Design, develop, and maintain data pipelines and ETL workflows on AWS platform
- Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards
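As a concrete example of the Athena-based analytics mentioned above, here is a hedged boto3 sketch that runs a query and polls for the result; the database, table, and output location are placeholders:

```python
# Sketch: run an Athena query with boto3 and poll until it completes.
import time
import boto3

athena = boto3.client("athena")
qid = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) AS n FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

state = "RUNNING"
while state not in ("SUCCEEDED", "FAILED", "CANCELLED"):
    time.sleep(2)
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```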

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
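To illustrate the "advanced SQL against Redshift" requirement, one possible approach is the Redshift Data API via boto3, sketched below; the cluster, database, user, and table names are all placeholders:

```python
# Sketch: run SQL against Redshift with the Redshift Data API (boto3).
import time
import boto3

rsd = boto3.client("redshift-data")
resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="""
        SELECT region, SUM(amount) AS revenue
        FROM sales
        GROUP BY region
        ORDER BY revenue DESC
        LIMIT 10
    """,
)

# Poll until the statement finishes, then fetch the result rows.
while rsd.describe_statement(Id=resp["Id"])["Status"] not in (
        "FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

for record in rsd.get_statement_result(Id=resp["Id"])["Records"]:
    print(record)
```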


Are you passionate about the power of data and excited to leverage cutting-edge AI/ML to drive business impact? At Poshmark, we tackle complex challenges in personalization, trust & safety, marketing optimization, product experience, and more.
Why Poshmark?
As a leader in Social Commerce, Poshmark offers an unparalleled opportunity to work with extensive multi-platform social and commerce data. With over 130 million users generating billions of daily events and petabytes of rapidly growing data, you’ll be at the forefront of data science innovation. If building impactful, data-driven AI solutions for millions excites you, this is your place.
What You’ll Do
- Drive end-to-end data science initiatives, from ideation to deployment, delivering measurable business impact through projects such as feed personalization, product recommendation systems, and attribute extraction using computer vision.
- Collaborate with cross-functional teams, including ML engineers, product managers, and business stakeholders, to design and deploy high-impact models.
- Develop scalable solutions for key areas like product, marketing, operations, and community functions.
- Own the entire ML Development lifecycle: data exploration, model development, deployment, and performance optimization.
- Apply best practices for managing and maintaining machine learning models in production environments.
- Explore and experiment with emerging AI trends, technologies, and methodologies to keep Poshmark at the cutting edge.
Your Experience & Skills
- Ideal Experience: 6-9 years of building scalable data science solutions in a big data environment. Experience with personalization algorithms, recommendation systems, or user behavior modeling is a big plus.
- Machine Learning Knowledge: Hands-on experience with key ML algorithms, including CNNs, Transformers, and Vision Transformers. Familiarity with Large Language Models (LLMs) and techniques like RAG or PEFT is a bonus.
- Technical Expertise: Proficiency in Python, SQL, and Spark (Scala or PySpark), with hands-on experience in deep learning frameworks like PyTorch or TensorFlow. Familiarity with ML engineering tools like Flask, Docker, and MLOps practices.
- Mathematical Foundations: Solid grasp of linear algebra, statistics, probability, calculus, and A/B testing concepts.
- Collaboration & Communication: Strong problem-solving skills and ability to communicate complex technical ideas to diverse audiences, including executives and engineers.
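In the spirit of the recommendation-system work described above, here is a toy matrix-factorization recommender in PyTorch; the sizes, data, and a single training step are synthetic and purely illustrative:

```python
# Toy matrix-factorization recommender: user/item embeddings, dot-product score.
import torch
import torch.nn as nn

n_users, n_items, dim = 1000, 500, 32

class MatrixFactorization(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Predicted affinity = dot product of user and item embeddings.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=1)

model = MatrixFactorization()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# One synthetic training step on random (user, item, rating) triples.
users = torch.randint(0, n_users, (256,))
items = torch.randint(0, n_items, (256,))
ratings = torch.rand(256) * 5

opt.zero_grad()
loss = loss_fn(model(users, items), ratings)
loss.backward()
opt.step()
print("training loss:", loss.item())
```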

Has substantial expertise in Linux OS, HTTPS, and proxies, with hands-on Perl and Python scripting.
Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.
Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid proxy, HTTPS, and scripting. Further aligns the network to meet the Company’s objectives through continuous development, improvement, and automation.
Preferably 10+ years of experience in network design and delivery of technology centric, customer-focused services.
Preferably 3+ years in modern software-defined network and preferably, in cloud-based environments.
Diploma or bachelor’s degree in engineering, Computer Science/Information Technology, or its equivalent.
Preferably possess a valid RHCE (Red Hat Certified Engineer) certification
Preferably possess any vendor Proxy certification (Forcepoint/ Websense/ bluecoat / equivalent)
Must possess advanced knowledge of TCP/IP concepts and fundamentals. Good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.
Fundamental understanding of proxies and PAC files.
Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.
Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.
Coding skills such as Perl, Python, Shell scripting will be advantageous.
Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.
Excellent communication skills – oral, written and presentation, across various types of target audiences.
Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, able to cope under pressure and will roll-up his/her sleeves to drive a project to success in a challenging environment.
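For the proxy skills above, a small Python sketch can verify HTTPS connectivity through a Squid proxy; the proxy host and port are placeholders for whatever your environment uses:

```python
# Sketch: verify HTTPS reachability through a Squid proxy with requests.
import requests

proxies = {
    "http": "http://squid.internal.example:3128",
    "https": "http://squid.internal.example:3128",  # CONNECT tunnel for HTTPS
}

try:
    resp = requests.get("https://example.com", proxies=proxies, timeout=10)
    print("status:", resp.status_code)
except requests.RequestException as exc:
    print("proxy check failed:", exc)
```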

Profile: AWS Data Engineer
Mode- Hybrid
Experience- 5 to 7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
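To ground the "Lambda functions for serverless data processing" line above, here is one hedged sketch of an S3-triggered handler; the event shape follows the standard S3 notification, and the processing step is a toy row count:

```python
# Sketch: S3-triggered Lambda that reads the new object and counts its rows.
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = [line for line in body.splitlines() if line.strip()]
        print(f"{bucket}/{key}: {len(rows)} rows")
    return {"statusCode": 200, "body": json.dumps({"processed": True})}
```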

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
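For the orchestration tools mentioned above, a minimal Airflow DAG (2.x style) looks roughly like the sketch below; the DAG id, schedule, and task bodies are placeholders:

```python
# Minimal Airflow 2.x DAG sketch: two Python tasks chained daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source")

def transform():
    print("clean and aggregate")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract runs before transform
```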

What We’re Looking For
- 4+ years of backend development experience in scalable web applications.
- Strong expertise in Python, Django ORM, and RESTful API design.
- Familiarity with relational databases like PostgreSQL and MySQL databases
- Comfortable working in a startup environment with multiple priorities.
- Understanding of cloud-native architectures and SaaS models.
- Strong ownership mindset and ability to work with minimal supervision.
- Excellent communication and teamwork skills.
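To make the Django ORM expectation above tangible, here is a self-contained sketch that configures Django in-process with an in-memory SQLite database; the Task model and "demo" app label are hypothetical, and in a real project the model would live in an app's models.py:

```python
# Standalone Django ORM sketch (hypothetical Task model, in-memory SQLite).
import django
from django.conf import settings

settings.configure(
    INSTALLED_APPS=["django.contrib.contenttypes", "django.contrib.auth"],
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3",
                           "NAME": ":memory:"}},
)
django.setup()

from django.db import connection, models

class Task(models.Model):
    title = models.CharField(max_length=200)
    done = models.BooleanField(default=False)

    class Meta:
        app_label = "demo"  # explicit label since this isn't a real app

# Create the table directly (a real project would use migrations).
with connection.schema_editor() as editor:
    editor.create_model(Task)

Task.objects.create(title="write tests")
print(list(Task.objects.filter(done=False).values("id", "title")))
```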


Role: Python Full Stack Developer with React
Hybrid: 2 days in a week (Noida, Bangalore, Chennai, Hyderabad)
Experience: 5+ Years
Contract Duration: 6 Months
Notice Period: less than 15 days

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate’s experience:
- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA
- Experience more than 8 yrs - Salary up to 40 LPA

What you’ll do
- Design, build, and maintain robust ETL/ELT pipelines for product and analytics data
- Work closely with business, product, analytics, and ML teams to define data needs
- Ensure high data quality, lineage, versioning, and observability
- Optimize performance of batch and streaming jobs
- Automate and scale ingestion, transformation, and monitoring workflows
- Document data models and key business metrics in a self-serve way
- Use AI tools to accelerate development, troubleshooting, and documentation
Must-Haves:
- 2–4 years of experience as a data engineer (product or analytics-focused preferred)
- Solid hands-on experience with Python and SQL
- Experience with data pipeline orchestration tools like Airflow or Prefect
- Understanding of data modeling, warehousing concepts, and performance optimization
- Familiarity with cloud platforms (GCP, AWS, or Azure)
- Bachelor's in Computer Science, Data Engineering, or a related field
- Strong problem-solving mindset and AI-native tooling comfort (Copilot, GPTs)
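In the spirit of the data-quality responsibilities above, here is a small assertion-style validation sketch with pandas; the columns and rules are hypothetical examples of the kind of checks a pipeline might run:

```python
# Illustrative data-quality checks with pandas (hypothetical columns/rules).
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [10.0, None, 15.5, -3.0],
})

checks = {
    "no_null_amounts": df["amount"].notna().all(),
    "unique_order_ids": df["order_id"].is_unique,
    "non_negative_amounts": (df["amount"].dropna() >= 0).all(),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"data quality checks failed: {failed}")
```
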
Job Title : Automation Quality Engineer (Gen AI)
Experience : 3 to 5+ Years
Location : Bangalore / Chennai / Pune
Role Overview :
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities :
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate into CI/CD pipelines.
- Validate model accuracy, robustness, and monitor model drift.
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
Mandatory Skills :
- Test Automation : Selenium, Playwright, or Deep Eval
- Programming/Scripting : Python, JavaScript
- API Testing : Postman, REST Assured
- Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
- Performance Testing : JMeter
- Bug Tracking : Azure DevOps
- Methodologies : Agile delivery experience
- Soft Skills : Strong communication and problem-solving abilities
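As one small building block of the model-accuracy validation named above, here is a hedged pytest sketch that fails the pipeline when accuracy drops below a floor; the model, data, and threshold are toys:

```python
# Pytest sketch: gate a pipeline on a minimum model accuracy.
import pytest
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.9  # fail CI if quality drops below this (illustrative)

@pytest.fixture
def trained_model():
    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr), X_te, y_te

def test_model_accuracy_above_floor(trained_model):
    model, X_te, y_te = trained_model
    assert model.score(X_te, y_te) >= ACCURACY_FLOOR
```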

Job Title: AI Engineer - NLP/LLM Data Product Engineer
Location: Chennai, TN (Hybrid)
Duration: Full time
Job Summary:
About the Role:
We are growing our Data Science and Data Engineering team and are looking for an experienced AI Engineer specializing in creating GenAI LLM solutions. This position involves collaborating with clients and their teams, discovering gaps for automation using AI, designing customized AI solutions, and implementing technologies to streamline data entry processes within the healthcare sector.
Responsibilities:
· Conduct detailed consultations with clients’ functional teams to understand client requirements; one use case relates to handwritten medical records.
· Analyze existing data entry workflows and propose automation opportunities.
Design:
· Design tailored AI-driven solutions for the extraction and digitization of information from handwritten medical records.
· Collaborate with clients to define project scopes and objectives.
Technology Selection:
· Evaluate and recommend AI technologies, focusing on NLP, LLM and machine learning.
· Ensure seamless integration with existing systems and workflows.
Prototyping and Testing:
· Develop prototypes and proof-of-concept models to demonstrate the feasibility of proposed solutions.
· Conduct rigorous testing to ensure accuracy and reliability.
Implementation and Integration:
· Work closely with clients and IT teams to integrate AI solutions effectively.
· Provide technical support during the implementation phase.
Training and Documentation:
· Develop training materials for end-users and support staff.
· Create comprehensive documentation for implemented solutions.
Continuous Improvement:
· Monitor and optimize the performance of deployed solutions.
· Identify opportunities for further automation and improvement.
Qualifications:
· Advanced degree in Computer Science, Artificial Intelligence, or a related field (Master’s or PhD required).
· Proven experience in developing and implementing AI solutions for data entry automation.
· Expertise in NLP, LLM and other machine-learning techniques.
· Strong programming skills, especially in Python.
· Familiarity with healthcare data privacy and regulatory requirements.
Additional Qualifications (great to have):
An ideal candidate will have expertise in the most current LLM/NLP models, particularly in the extraction of data from clinical reports, lab reports, and radiology reports. The ideal candidate should have a deep understanding of EMR/EHR applications and patient-related data.
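To sketch the kind of NLP extraction this role describes, here is a hedged example using a Hugging Face token-classification pipeline on clinical-style text. It assumes the handwritten record has already been OCR'd into text; the model named is a generic public NER model, not a recommendation for clinical use:

```python
# Sketch: entity extraction from (already-OCR'd) text with transformers.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",     # generic public NER model (illustrative)
    aggregation_strategy="simple",   # merge word pieces into whole entities
)

text = "Patient was prescribed amoxicillin at Chennai General Hospital."
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], f"({entity['score']:.2f})")
```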

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
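As a rough sketch of the Raw → Silver → Gold flow mentioned above, here is how it might look with PySpark and Delta tables on Databricks; all paths and columns are placeholders, and Delta Lake support is assumed to be available in the runtime:

```python
# Sketch: medallion-style Raw -> Silver -> Gold refinement with PySpark/Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Raw -> Silver: parse timestamps and deduplicate on the event key.
raw = spark.read.json("/mnt/raw/events/")
silver = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events/")

# Silver -> Gold: business-level daily aggregate.
gold = silver.groupBy(F.to_date("event_ts").alias("day")).count()
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_event_counts/")
```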
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG


Level of skills and experience:
5 years of hands-on experience in using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
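For the ML frameworks named above, a minimal XGBoost training-and-evaluation sketch looks like the following; the data is synthetic and the hyperparameters are placeholders:

```python
# Minimal XGBoost sketch: train a classifier and report held-out accuracy.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```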


Job description
We are looking for an experienced Python developer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers.
Responsibilities:
- Coordinating with development teams to determine application requirements.
- Writing scalable code using Python programming language.
- Testing and debugging applications.
- Developing back-end components.
- Integrating user-facing elements using server-side logic.
- Assessing and prioritizing client feature requests.
- Integrating data storage solutions.
- Coordinating with front-end developers.
- Reprogramming existing databases to improve functionality.
- Developing digital tools to monitor online traffic.
Requirements:
- Bachelor's degree in Computer Science, Computer Engineering, or related field.
- 2-7 years of experience as a Python Developer.
- Expert knowledge of Python, the Flask framework, and FastAPI.
- Solid experience in MongoDB and Elasticsearch.
- Work experience with RESTful APIs.
- A deep understanding of multi-process architecture and the threading limitations of Python.
- Ability to integrate multiple data sources into a single system.
- Familiarity with testing tools.
- Ability to collaborate on projects and work independently when required.
- Excellent troubleshooting skills.
- Good project management skills.
SKILLS:
- PYTHON
- MONGODB
- FLASK
- REST API DEVELOPMENT
- TWILIO
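To illustrate the Flask + MongoDB + REST combination in the skills list above, here is a hedged sketch of a single CRUD-style endpoint; the connection string, database, and collection names are placeholders:

```python
# Sketch: Flask REST endpoint backed by MongoDB via PyMongo.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["demo_db"]  # placeholder URI

@app.route("/api/items", methods=["GET", "POST"])
def items():
    if request.method == "POST":
        doc = request.get_json(force=True)
        inserted = db.items.insert_one(doc)
        return jsonify({"id": str(inserted.inserted_id)}), 201
    # ObjectId isn't JSON-serializable, so stringify it on the way out.
    return jsonify([{**d, "_id": str(d["_id"])} for d in db.items.find()])

if __name__ == "__main__":
    app.run(debug=True)
```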
Job Type: Full-time
Pay: ₹10,000.00 - ₹30,000.00 per month
Benefits:
- Flexible schedule
- Paid time off
Schedule:
- Day shift
Supplemental Pay:
- Overtime pay
Ability to commute/relocate:
- Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Experience:
- Python: 1 year (Required)
Work Location: In person

We are seeking a highly skilled and experienced Offshore Data Engineer . The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP Services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
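For the GCP/BigQuery side of this role, here is a short, hedged sketch of a parameterized query with the official Python client; the project, dataset, and table names are placeholders, and GCP credentials are assumed to be configured in the environment:

```python
# Sketch: parameterized BigQuery query via the google-cloud-bigquery client.
from google.cloud import bigquery

client = bigquery.Client()  # picks up credentials from the environment
query = """
    SELECT order_date, COUNT(*) AS n_orders
    FROM `example-project.sales.orders`
    WHERE order_date >= @start
    GROUP BY order_date
    ORDER BY order_date
"""
job = client.query(query, job_config=bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01")]
))
for row in job.result():
    print(row.order_date, row.n_orders)
```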
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!


Key Responsibilities:
- Develop and maintain scalable Python applications for AI/ML projects.
- Design, train, and evaluate machine learning models for classification, regression, NLP, computer vision, or recommendation systems.
- Collaborate with data scientists, ML engineers, and software developers to integrate models into production systems.
- Optimize model performance and ensure low-latency inference in real-time environments.
- Work with large datasets to perform data cleaning, feature engineering, and data transformation.
- Stay current with new developments in machine learning frameworks and Python libraries.
- Write clean, testable, and efficient code following best practices.
- Develop RESTful APIs and deploy ML models via cloud or container-based solutions (e.g., AWS, Docker, Kubernetes).
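One common pattern for the "RESTful APIs for ML models" responsibility above is a FastAPI inference endpoint, sketched below with a toy scikit-learn model; the route, payload schema, and filename are illustrative:

```python
# Sketch: serve a toy scikit-learn model behind a FastAPI /predict endpoint.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)  # trained at startup (toy)

app = FastAPI()

class Features(BaseModel):
    values: list[float]  # expects the 4 iris features

@app.post("/predict")
def predict(features: Features):
    pred = model.predict([features.values])[0]
    return {"class_id": int(pred)}

# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```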
Share CV to:
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- Have a background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in Industry (20-25% hike on the current CTC)
Note:
Only immediate to 15-day joiners will be preferred.
Only candidates from Tier 1 companies will be shortlisted.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, of which 5 years of relevant experience is required.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge, Redshift and Snowflake preferred
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, incl. Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, incl. Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across NTT Data customer account base. Working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
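As a small, hedged example of the monitoring duties above, the boto3 snippet below lists CloudWatch alarms currently in the ALARM state, a common starting point for incident triage; credentials and region are assumed to be configured:

```python
# Sketch: list CloudWatch alarms in ALARM state as an incident-triage starting point.
import boto3

cloudwatch = boto3.client("cloudwatch")
paginator = cloudwatch.get_paginator("describe_alarms")

for page in paginator.paginate(StateValue="ALARM"):
    for alarm in page["MetricAlarms"]:
        print(alarm["AlarmName"], "-", alarm["StateReason"])
```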
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN

CoinFantasy is looking for an experienced Senior AI Architect to lead both the decentralised protocol development and the design of AI-driven applications on this network. As a visionary in AI and distributed computing, you will play a central role in shaping the protocol’s technical direction, enabling efficient task distribution, and scaling AI use cases across a heterogeneous, decentralised infrastructure.
Job Responsibilities
- Architect and oversee the protocol’s development, focusing on dynamic node orchestration, layer-wise model sharding, and secure, P2P network communication.
- Drive the end-to-end creation of AI applications, ensuring they are optimised for decentralised deployment and include use cases with autonomous agent workflows.
- Architect AI systems capable of running on decentralised networks, ensuring they balance speed, scalability, and resource usage.
- Design data pipelines and governance strategies for securely handling large-scale, decentralised datasets.
- Implement and refine strategies for swarm intelligence-based task distribution and resource allocation across nodes. Identify and incorporate trends in decentralised AI, such as federated learning and swarm intelligence, relevant to various industry applications.
- Lead cross-functional teams in delivering full-precision computing and building a secure, robust decentralised network.
- Represent the organisation’s technical direction, serving as the face of the company at industry events and client meetings.
Requirements
- Bachelor’s/Master’s/Ph.D. in Computer Science, AI, or related field.
- 12+ years of experience in AI/ML, with a track record of building distributed systems and AI solutions at scale.
- Strong proficiency in Python, Golang, and machine learning frameworks (e.g., TensorFlow, PyTorch).
- Expertise in decentralised architecture, P2P networking, and heterogeneous computing environments.
- Excellent leadership skills, with experience in cross-functional team management and strategic decision-making.
- Strong communication skills, adept at presenting complex technical solutions to diverse audiences.
About Us
CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users. It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.
Building on this foundation, we are now developing a groundbreaking decentralised protocol that will transform the AI landscape.
Website:
Benefits
- Competitive Salary
- An opportunity to be part of the Core team in a fast-growing company
- A fulfilling, challenging and flexible work experience
- Practically unlimited professional and career growth opportunities
About koolio.ai
Website: www.koolio.ai
Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Internship Position
We are looking for a motivated Backend Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.
Key Responsibilities:
- Assist in the development and maintenance of backend systems and APIs.
- Write reusable, testable, and efficient code to support scalable web applications.
- Work with cloud services and server-side technologies to manage data and optimize performance.
- Troubleshoot and debug existing backend systems, ensuring reliability and performance.
- Collaborate with cross-functional teams to integrate frontend features with backend logic.
Requirements and Skills:
- Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Good understanding of server-side technologies like Python
- Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
- Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
- Knowledge of version control systems such as Git.
- Soft Skills:
- Eagerness to learn and adapt in a fast-paced environment.
- Strong problem-solving and critical-thinking skills.
- Effective communication and teamwork capabilities.
- Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.
Why Join Us?
- Gain real-world experience working on a cutting-edge platform.
- Work alongside a talented and passionate team committed to innovation.
- Receive mentorship and guidance from industry experts.
- Opportunity to transition to a full-time role based on performance and company needs.
This internship is an excellent opportunity to kickstart your career in backend development, build critical skills, and contribute to a product that has a real-world impact.



About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
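To give a feel for the Cloud Functions work mentioned above, here is a minimal, hypothetical HTTP-triggered function in Python using functions-framework; the endpoint name and payload are illustrative, not part of koolio.ai's actual API:

```python
# Sketch: HTTP-triggered Google Cloud Function (functions-framework style).
import functions_framework

@functions_framework.http
def transcode_status(request):
    """Return a JSON status for a (hypothetical) audio-processing job."""
    job_id = request.args.get("job_id", "unknown")
    return {"job_id": job_id, "status": "processing"}, 200

# Local test: functions-framework --target transcode_status --port 8080
```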
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: 6+ years of proven experience as a Full Stack Developer or similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Total Yearly Compensation: ₹25 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact