50+ Python Jobs in Chennai | Python Job openings in Chennai
Apply to 50+ Python Jobs in Chennai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.



About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, mathematicians, and PhDs from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS, Azure, or GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infrastructure and data-related services on AWS (EC2, EMR, RDS, Redshift) and/or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools such as Luigi or Airflow (see the sketch after this list).
- Experience with common relational SQL, NoSQL and Graph databases.
- Strong experience with scripting languages: Python, PySpark, Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc
- Experience with big data tools (Spark, Kafka, etc) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
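To make the workflow-management requirement concrete, here is a minimal sketch of the kind of Airflow DAG this role would own. It assumes Airflow 2.x; the DAG name, stub tasks, and data are purely illustrative, not an actual Moative pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # Stub: pull raw records from a source system.
    return [{"order_id": 1, "amount": 42.0}]

def transform_orders(**context):
    # Read the upstream task's return value via XCom and drop bad rows.
    rows = context["ti"].xcom_pull(task_ids="extract_orders")
    return [r for r in rows if r["amount"] > 0]

def load_orders(**context):
    rows = context["ti"].xcom_pull(task_ids="transform_orders")
    print(f"Would load {len(rows)} rows into the warehouse")

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)

    extract >> transform >> load
```

In a real pipeline the stubs would be replaced by reads from source systems and writes to the warehouse, with retries and alerting configured on the DAG.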
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep happens on purpose, unless we constantly question it. We are deliberate about which rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work are of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location- Pune/ Chennai
Job Type- Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- 4+ years of experience in a web-application-centric, client-server application development environment focused on Systems Management, Systems Monitoring, or Performance Management software.
- Understanding of integrated infrastructure platforms, and experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- 4+ years of development experience in a high-level language such as Python, Java, or Go.
- Bachelor’s (B.E., B.Tech.) or Master’s (M.E., M.Tech., MCA) degree in Computer Science, Computer Engineering, or equivalent.
- 2 years of development experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes and similar services.
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

We are seeking a Full Stack Developer with exceptional communication skills to collaborate daily with our international clients in the US and Australia. This role requires not only technical expertise but also the ability to clearly articulate ideas, gather requirements, and maintain strong client relationships. Communication is the top priority.
The ideal candidate is passionate about technology, eager to learn and adapt to new stacks, and capable of delivering scalable, high-quality solutions across the stack.
Key Responsibilities
- Client Communication: Act as a daily point of contact for clients (US & Australia), ensuring smooth collaboration and requirement gathering.
- Backend Development:
- Design and implement REST APIs and GraphQL endpoints.
- Integrate secure authentication methods including OAuth, Passwordless, and Signature-based authentication.
- Build scalable backend services with Node.js and serverless frameworks.
- Frontend Development:
- Develop responsive, mobile-friendly UIs using React and Tailwind CSS.
- Ensure cross-browser and cross-device compatibility.
- Database Management:
- Work with RDBMS, NoSQL, MongoDB, and DynamoDB.
- Cloud & DevOps:
- Deploy applications on AWS / GCP / Azure (knowledge of at least one required).
- Work with CI/CD pipelines, monitoring, and deployment automation.
- Quality Assurance:
- Write and maintain unit tests to ensure high code quality.
- Participate in code reviews and follow best practices.
- Continuous Learning:
- Stay updated on the latest technologies and bring innovative solutions to the team.
Must-Have Skills
- Excellent communication skills (verbal & written) for daily client interaction.
- 2+ years of experience in full-stack development.
- Proficiency in Node.js and React.
- Strong knowledge of REST API and GraphQL development.
- Experience with OAuth, Passwordless, and Signature-based authentication methods.
- Database expertise with RDBMS, NoSQL, MongoDB, DynamoDB.
- Experience with Serverless Framework.
- Strong frontend skills: React, Tailwind CSS, responsive design.
Nice-to-Have Skills
- Familiarity with Python for backend or scripting.
- Cloud experience with AWS, GCP, or Azure.
- Knowledge of DevOps practices and CI/CD pipelines.
- Experience with unit testing frameworks and TDD.
Who You Are
- A confident communicator who can manage client conversations independently.
- Passionate about learning and experimenting with new technologies.
- Detail-oriented and committed to delivering high-quality software.
- A collaborative team player who thrives in dynamic environments.


Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across departments and the R&D organization, with colleagues having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms, and experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- 6+ years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across departments and the R&D organization, with colleagues having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 4-10 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms, and experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- 6+ years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana:
Virtana delivers the industry’s broadest and deepest Observability Platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Development and Customization:
Build and customize Frappe modules to meet business requirements.
Develop new functionalities and troubleshoot issues in ERPNext applications.
Integrate third-party APIs for seamless interoperability.
Technical Support:
Provide technical support to end-users and resolve system issues.
Maintain technical documentation for implementations.
Collaboration:
Work with teams to gather requirements and recommend solutions.
Participate in code reviews for quality standards.
Continuous Improvement:
Stay updated with Frappe developments and optimize application performance.
Skills Required:
Proficiency in Python, JavaScript, and relational databases.
Knowledge of Frappe/ERPNext framework and object-oriented programming (see the sketch after this list).
Experience with Git for version control.
Strong analytical skills.
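As a rough illustration of the Frappe customization work described above, here is a minimal whitelisted server-side method of the kind an ERPNext developer writes. It is a sketch that assumes a standard ERPNext install with the Sales Invoice doctype; the method name and filters are illustrative, and the code must live inside a Frappe app rather than run standalone.

```python
import frappe

@frappe.whitelist()
def overdue_invoices(customer: str):
    """Return overdue Sales Invoices for a customer (illustrative example)."""
    # frappe.get_all runs a permission-aware query against the doctype's table.
    return frappe.get_all(
        "Sales Invoice",
        filters={"customer": customer, "status": "Overdue"},
        fields=["name", "due_date", "outstanding_amount"],
    )
```

Once placed in an app module, such a method is callable from client scripts or over REST at /api/method/<dotted.path>.overdue_invoices.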


What you’ll do here:
- Develop and maintain software applications using Python
- Collaborate with cross-functional teams to define software requirements and design specifications
- Conduct code reviews and provide constructive feedback to team members
- Troubleshoot and debug software issues, identify root causes, and implement effective solutions
- Contribute to the design and architecture of software systems
- Perform unit testing and integration testing to ensure software quality and reliability
- Keep up-to-date with the latest trends and best practices in software development
- Create and maintain detailed technical documentation for system designs, processes, and applications
- Mentor junior developers and provide technical guidance to ensure the delivery of high-quality solutions.
What you will need to thrive:
- Bachelor's degree in Computer Science or a related field
- 3+ years of proven experience as a Python Engineer or in a similar role
- Strong understanding of relational databases such as MySQL, as well as NoSQL stores
- Experience with software development methodologies and best practices
- Solid knowledge of relational databases and SQL
- Exposure to front-end technologies such as JavaScript and React
- Flexibility to adapt to changing priorities and handle multiple tasks simultaneously
- Proven experience in mentoring junior developers and fostering a culture of continuous learning.
- Attention to detail and a commitment to delivering high-quality software solutions

Role Overview:
We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), data warehousing principles, and proficiency in data manipulation and visualization tools, including Python and SQL.
Responsibilities:
- Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
- Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
- Define data governance policies and procedures to ensure data quality, security, and compliance.
- Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
- Develop and execute data migration strategies to Oracle Cloud.
- Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
- Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
- Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders (see the sketch after this list).
- Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
- Ensure the performance and reliability of data visualization dashboards and reports.
- Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
- Troubleshoot data-related issues and provide timely resolutions.
- Document data architectures, data flows, and data visualization solutions.
- Participate in the evaluation and selection of new data technologies and tools.
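To ground the Python side of the visualization work, here is a minimal pandas + Matplotlib sketch. The data, column names, and units are invented for illustration; in this role the frame would typically be loaded from Oracle via SQL rather than constructed inline.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative data; in practice this would come from an Oracle query.
sales = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "revenue": [120, 135, 128, 150, 162, 171],
})

# Simple derived metric: month-over-month growth.
sales["growth_pct"] = sales["revenue"].pct_change() * 100

fig, ax = plt.subplots()
ax.plot(sales["month"], sales["revenue"], marker="o")
ax.set_title("Monthly Revenue")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue (units are illustrative)")
fig.autofmt_xdate()
fig.savefig("revenue_trend.png")
```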
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
- Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role.
- Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
- Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
- Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
- Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
- Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
- Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights.
- Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences.
- Experience with data governance and data quality principles.
- Familiarity with agile development methodologies.
- Ability to work independently and collaboratively within a team environment.
Application Link- https://forms.gle/km7n2WipJhC2Lj2r5



About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization, and DevOps tooling (Docker, Kubernetes)
- Experience tuning and deploying foundation models, particularly for vision tasks and data extraction (see the sketch after this list)
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
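As a hedged illustration of the vision-and-data-extraction work described above, the sketch below queries a public document-understanding model through the Hugging Face transformers pipeline API. The checkpoint, image path, and question are examples only, not Moative’s actual stack; the document-question-answering pipeline also requires an OCR backend such as pytesseract to be installed.

```python
from transformers import pipeline

# Public example checkpoint for document QA; not necessarily what this team uses.
extractor = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
)

# "scanned_invoice.png" is a placeholder path to a document image.
answer = extractor(
    image="scanned_invoice.png",
    question="What is the invoice total?",
)
print(answer)  # list of candidate answers with confidence scores
```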
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep happens on purpose, unless we constantly question it. We are deliberate about which rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work are of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue (see the sketch after this list)
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
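For concreteness, here is a minimal PySpark transformation of the shape these pipelines take. The bucket paths, column names, and filters are placeholders; on Databricks the reads and writes would usually target Delta tables rather than raw Parquet.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_cleanup").getOrCreate()

# Placeholder source path; raw order events landed as JSON.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# De-duplicate, drop bad rows, and derive a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Placeholder destination; partitioned for downstream analytics.
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/orders/"))
```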
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment


Technical Lead
The ideal candidate should possess the following qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Experience: 9+ years in software development with a proven track record of delivering scalable applications.
- Leadership Skills: 4+ years of experience in a technical leadership role, demonstrating strong mentoring abilities.
- Team Leadership: Lead and mentor a team of software developers and validation engineers.
- Technical Skills: Proficiency in programming languages and technologies such as C#, .NET, Python, JavaScript, React.js, SQL/MySQL, and Web API, along with the frameworks and tools used in software development.
- General working knowledge of Selenium to support current business automation tools and future automation requirements.
- General working knowledge of PHP desired to support current legacy applications which are on the roadmap for future modernization.
- Strong understanding of the software development lifecycle (SDLC).
- Experience with agile methodologies (Scrum/Kanban or similar).
- Knowledge of version control systems (Git or similar).
- Development Methodologies: Experience with Agile development methodologies and experience with CI/CD pipelines.
- Problem-Solving Skills: Strong analytical and problem-solving abilities that enable the identification of complex technical issues.
- Collaboration: Excellent communication and collaboration skills, with the ability to work effectively within a team environment.
- Innovation: A passion for technology and innovation, with a keen interest in exploring new technologies to find the best solutions.


What You’ll Do
- Build & tune models: embeddings, transformers, retrieval pipelines, evaluation frameworks.
- Architect Python services (FastAPI/Flask) to embed ML/LLM workflows end-to-end (see the sketch after this list).
- Translate AI research into production features for data extraction, document reasoning, and risk analytics.
- Own the full user flow: back-end → front-end (React/TS) → CI/CD on Azure & Docker.
- Leverage AI coding tools (Copilot, Cursor, Jules) to meet our 1 dev = 4 devs productivity bar.
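Below is a minimal sketch of the kind of FastAPI service this stack implies, with the actual embedding/retrieval/LLM pipeline stubbed out. The service name, route, and helper are hypothetical, not Intain’s real API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="doc-extraction-service")  # hypothetical service name

class ExtractRequest(BaseModel):
    document_text: str

def run_extraction(text: str) -> dict:
    # Stand-in for the real embeddings/retrieval/LLM pipeline.
    return {"num_chars": len(text)}

@app.post("/extract")
def extract(req: ExtractRequest) -> dict:
    # Request validation is handled by the Pydantic model above.
    return {"fields": run_extraction(req.document_text)}
```

Run locally with `uvicorn main:app --reload` (assuming the file is main.py) and POST JSON to /extract.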
Core Tech Stack:
- Primary:
Python · FastAPI/Flask · Pandas · SQL/NoSQL · Hugging Face · LangChain/RAG · REST/GraphQL · Azure · Docker
- Bonus:
React.js · Vector Databases · Kubernetes
You Bring:
- Proven track record shipping Python features and training/serving ML or LLM models.
- Comfort reading research papers/blogs, prototyping ideas, and measuring model performance.
- 360° product mindset: tests, reviews, secure code, quick iterations.
- Strong ownership and output focus — impact beats years of experience.
Why Join Intain?
- Small, expert team where your code and models hit production fast.
- Work on real-world AI problems powering billions in structured-finance transactions.
- Compensation & ESOPs tied directly to the value you ship.
📍 About Us
Intain is transforming structured finance using AI — from data ingestion to risk analytics. Our platform, powered by IntainAI and Ida, helps institutions manage and scale transactions seamlessly.

Basic Qualifications :
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript, etc.
● Server-side development experience, mainly in Java (Python and Node.js are also acceptable)
● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS, jQuery, etc., is good to have.
● Passion for software engineering and for following best coding practices.
● Good-to-great problem-solving and communication skills.
Nice to have Qualifications :
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with devops, continuous integration & deployment.
● Exposure to big data technologies, Machine Learning and NLP will be a plus.


Job Title: Software Engineer Consultant/Expert 34192
Location: Chennai
Work Type: Onsite
Notice Period: Immediate joiners only, or candidates serving notice of up to 30 days.
Position Description:
- Candidate with strong Python experience.
- Full-stack development on GCP, including end-to-end deployment and MLOps; a software engineer with hands-on experience across front end, back end, and MLOps.
- This is a Tech Anchor role.
Experience Required:
- 7 Plus Years



Key Responsibilities (34249):
- Feature Development: Design, develop, and maintain new features and enhancements across the stack.
- Front-End: Build intuitive, responsive UIs using Angular or React.
- Back-End: Develop scalable APIs and services using Python (preferred), Java/Spring, or Node.js.
- Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP) — familiarity with services like App Engine, Cloud Functions, Kubernetes is expected.
- Performance Tuning: Identify and optimize performance bottlenecks.
- Code Quality: Participate in code reviews and maintain high standards through unit testing and automation.
- DevOps & CI/CD: Collaborate on deployment pipelines using Tekton, Terraform, and other DevOps tools.
- Cross-Functional Collaboration: Work closely with Product Managers, UI/UX Designers, and fellow Engineers in an agile environment.
Must-Have Skills:
- Strong development expertise in Python (preferred), Angular, and GCP
- Understanding of DevOps practices
- Experience with SDLC, agile methodologies, and unit testing
Good to Have (Nice-to-Haves):
- Hands-on experience with:
- Tekton, Terraform, CI/CD pipelines
- Large Language Models (LLMs) integration
- AWS/Azure (in addition to GCP)
- Contributions to open-source projects
- Familiarity with API design and microservices architecture
Educational Qualification:
- Required: Bachelor’s Degree in Computer Science, Engineering, or related discipline

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway (see the sketch after this list).
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
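As a minimal sketch of the serverless pattern named above, here is a Python Lambda handler shaped for an API Gateway proxy integration. The payload fields are illustrative.

```python
import json

def lambda_handler(event, context):
    """Handle an API Gateway (proxy integration) request; fields are illustrative."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # API Gateway proxy integrations expect this response shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```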
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.

· Design, develop, and implement AI/ML models and algorithms.
· Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
· Write clean, efficient, and well-documented code.
· Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
· Work closely with senior team members to understand project requirements and contribute to technical solutions.
· Troubleshoot and debug AI/ML models and applications.
· Stay up-to-date with the latest advancements in AI/ML.
· Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models (see the sketch after this list).
· Develop and deploy AI solutions on Google Cloud Platform (GCP).
· Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
· Utilize Vertex AI for model training, deployment, and management.
· Integrate and leverage Google Gemini for specific AI functionalities.
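In the POC spirit described above, a minimal scikit-learn train-and-evaluate loop might look like the sketch below; the dataset and model choice are placeholders, and the Vertex AI training/deployment steps mentioned earlier are out of scope here.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for real project data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Held-out evaluation before any deployment step.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```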
Qualifications:
· Bachelor’s degree in computer science, Artificial Intelligence, or a related field.
· 3+ years of experience in developing and implementing AI/ML models.
· Strong programming skills in Python.
· Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
· Good understanding of machine learning concepts and techniques.
· Ability to work independently and as part of a team.
· Strong problem-solving skills.
· Good communication skills.
· Experience with Google Cloud Platform (GCP) is preferred.
· Familiarity with Vertex AI is a plus.

- Work with a team to provide end to end solutions including coding, unit testing and defect fixes.
- Work to build scalable solutions and work with quality assurance and control teams to analyze and fix issues
- Develop and maintain APIs and Services in Node.js/Python
- Develop and maintain web-based UIs using front-end frameworks
- Participate in code reviews, unit testing and integration testing
- Participate in the full software development lifecycle, from concept and design to implementation and support
- Ensure application performance, scalability, and security through best practices in coding, testing and deployment
- Collaborate with DevOps team for troubleshooting deployment issues
Qualification
● 1-5 years of experience as a Software Engineer or similar, focusing on software development and system integration
● Proficiency in Node.js, TypeScript, React, and the Express framework
● In-depth knowledge of databases such as MongoDB
● Proficient in HTML5, CSS3, and responsive UI design
● Proficiency in any Python development framework is a plus
● Strong direct experience in functional and object-oriented programming using JavaScript
● Experience with cloud platforms (Azure preferred)
● Microservices architecture and containerization
● Expertise in performance monitoring, tuning, and optimization
● Understanding of DevOps practices for automated deployments
● Understanding of software design patterns and best practices
● Practical experience working in Agile developments (scrum)
● Excellent critical thinking skills and the ability to mentor junior team members
● Effectively communicate and collaborate with cross-functional teams
● Strong capability to work independently and deliver results within tight deadlines
● Strong problem-solving abilities and attention to detail


Proficient in Golang, Python, Java, C++, or Ruby (at least one)
Strong grasp of system design, data structures, and algorithms
Experience with RESTful APIs, relational and NoSQL databases
Proven ability to mentor developers and drive quality delivery
Track record of building high-performance, scalable systems
Excellent communication and problem-solving skills
Experience in consulting or contractor roles is a plus

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.
Responsibilities:
Ensure the quality of product feature development.
Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.
Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python (see the sketch after this list).
Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.
Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.
Collaboration: Participate in project planning and execution with the team for efficient project delivery.
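As a rough sketch of the Python test automation this role involves, here is a small pytest + Selenium suite. The target URL and assertions are placeholders; it assumes Selenium 4, where the bundled selenium-manager resolves the browser driver.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    drv = webdriver.Chrome()  # Selenium 4 resolves the driver automatically
    yield drv
    drv.quit()

def test_homepage_title(driver):
    driver.get("https://example.com")  # placeholder application URL
    assert "Example" in driver.title

def test_page_has_a_visible_link(driver):
    driver.get("https://example.com")
    link = driver.find_element(By.TAG_NAME, "a")
    assert link.is_displayed()
```

Run with `pytest -v`; in CI the same suite would typically run headless against a deployed test environment.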
Requirements:
A Bachelor's degree in computer science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.
2-5 years of solid experience in software testing, with a focus on automation. Proficiency in using defect tracking systems, code repositories, and IDEs.
A good grasp of programming languages such as Python, Java, or JavaScript. Must be able to understand and write code.
Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).
Good team player with a proactive approach to continuous learning.
Sound understanding of the Agile software development methodology.
Experience in a SaaS-based product company or a fast-paced startup environment is a plus.

Job Title: Site Reliability Engineer (SRE)
Experience: 4+ Years
Work Location: Bangalore / Chennai / Pune / Gurgaon
Work Mode: Hybrid or Onsite (based on project need)
Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.
🛠️ Key Responsibilities
- Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
- Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting (see the sketch after this list).
- Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
- Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
- Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
- Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
- Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
- Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.
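A minimal sketch of the Python/AWS automation named above: polling a CloudWatch metric with boto3 and flagging a breach. The namespace, threshold, and instance id are illustrative; a production version would feed an alerting pipeline rather than print.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str) -> float:
    """Average EC2 CPU utilization over the last 15 minutes."""
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.now(timezone.utc) - timedelta(minutes=15),
        EndTime=datetime.now(timezone.utc),
        Period=300,
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

if __name__ == "__main__":
    util = average_cpu("i-0123456789abcdef0")  # placeholder instance id
    if util > 80.0:  # illustrative threshold
        print(f"ALERT: CPU at {util:.1f}%")
```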
✅ Must-Have Skills
- 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
- Strong programming skills in Python (mandatory).
- Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
- Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
- Strong background in Linux-based systems and shell scripting.
- Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
- Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
- Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.

Job Title: AI Solutioning Architect – Healthcare IT
Role Summary:
The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).
Key Responsibilities:
- Architect scalable AI solutions from data ingestion to deployment.
- Align AI initiatives with business objectives and regulatory requirements (HIPAA).
- Collaborate with cross-functional teams to deliver AI projects.
- Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
- Mentor technical teams and ensure best practices in MLOps.
- Communicate complex concepts to diverse stakeholders.
Qualifications:
- Bachelor’s/Master’s in Computer Science or related field.
- 12+ years in software development/architecture with strong AI/ML focus.
- Experience in healthcare IT and compliance (HIPAA).
- Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
- Hands-on with GCP (preferred) or other cloud platforms.
- Strong leadership, problem-solving, and communication skills.


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.

Job Summary:
We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems.
Key Responsibilities:
- Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn.
- Perform data preprocessing, feature engineering, and exploratory data analysis.
- Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI (see the sketch after this list).
- Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions.
- Optimize model performance and ensure robustness in real-time environments.
- Maintain clear documentation of code, models, and processes.
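To make the API-integration responsibility concrete, here is a minimal sketch of serving a scikit-learn model behind a FastAPI endpoint. The model artifact name and the input shape are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: serving a scikit-learn model behind a FastAPI endpoint.
# The model path ("model.joblib") and the flat numeric feature list are
# illustrative assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained classifier

class Features(BaseModel):
    values: list[float]  # e.g., a row of numeric features

@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)
    prediction = model.predict(X)
    return {"prediction": prediction.tolist()}
```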
Required Skills:
- Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Strong understanding of ML algorithms (classification, regression, clustering, deep learning).
- Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP).
- Familiarity with containerization (Docker, Kubernetes) and CI/CD practices.
- Solid grasp of RESTful API development and integration.
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or related field.
- 2–5 years of experience in Python development with a focus on AI/ML.
- Exposure to MLOps practices and model monitoring tools.


Genspark is hiring professionals for C development for their premium client.
Work Location- Chennai
Entry Criteria
- Graduate from any engineering background / BSc / MSc / MCA with specialization in Computer/Electronics/IT
- Minimum 1 year of industry experience
- Working knowledge of C/Embedded/C++/DSA
Programming Aptitude (Any Language)
- Basic understanding of programming constructs: variables, loops, conditionals, functions
- Logical thinking and an algorithmic approach
Computer Science Fundamentals
- Data structures basics: arrays, stacks, queues, linked lists
- Operating system basics: processes/threads, memory, the file system, etc.
- Basic understanding of compilation, runtime, networking, sockets, etc.
Problem Solving & Logical Reasoning
- Ability to trace logic, find errors, and reason through pseudocode
- Analytical and debugging capabilities
Learning Attitude & Communication
- Demonstrated interest in low-level or systems programming (even with no prior experience)
- Willingness to learn C and work close to the OS level
- Clarity of thought and the ability to explain what they know
Soft Skills
- Able to explain and communicate thoughts clearly in English
- Confident in solving new problems independently or with guidance
- Willingness to take feedback and iterate
Evaluation Process
Candidates will be assigned an online test, followed by a technical screening.
Shortlisted candidates will then appear for a face-to-face (F2F) interview with the client in Chennai.


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.


Job Title : Senior Machine Learning Engineer
Experience : 8+ Years
Location : Chennai
Notice Period : Immediate Joiners Only
Work Mode : Hybrid
Job Summary :
We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.
The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.
Mandatory Skills :
- Programming Languages : Python
- Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
- ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering (see the sketch after this list)
- Operating Systems : RHEL or any Unix-based OS
- Databases : Oracle or any relational database
- Version Control : Git
- Development Methodologies : Agile
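As a rough illustration of the clustering techniques named above, the following sketch runs K-Means and DBSCAN on synthetic data with scikit-learn; the cluster count and the eps/min_samples values are arbitrary example choices, not values the role prescribes.

```python
# Illustrative sketch: K-Means vs. DBSCAN on synthetic blobs (scikit-learn).
# Cluster counts and eps/min_samples are arbitrary example choices.
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)  # -1 marks noise

print("K-Means clusters:", set(kmeans_labels))
print("DBSCAN clusters (incl. noise):", set(dbscan_labels))
```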
Desired Skills :
- Experience with issue tracking tools such as Azure DevOps or JIRA.
- Understanding of data science concepts.
- Familiarity with Big Data algorithms, models, and libraries.

Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue (a sketch follows this list)
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
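To give a concrete flavour of the Glue-style PySpark transformation work above, here is a hedged sketch; the S3 paths and column names are placeholders, and inside AWS Glue the SparkSession would normally come from a GlueContext rather than being built directly.

```python
# Hedged sketch of a PySpark transformation of the kind a Glue job might run.
# Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # placeholder path
cleaned = (
    orders
    .dropDuplicates(["order_id"])                     # basic de-duplication
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .filter(F.col("amount") > 0)                      # drop invalid rows
)
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"             # placeholder path
)
```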
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements (see the sketch after this list).
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
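As one concrete example of the advanced SQL work mentioned above, the sketch below submits a window-function query to Athena via boto3; the database, table, and S3 output location are invented placeholders.

```python
# Hedged sketch: submitting an analytical SQL query to Athena with boto3.
# Database, table, and the S3 output location are invented placeholders.
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

QUERY = """
SELECT customer_id,
       order_date,
       SUM(amount) OVER (PARTITION BY customer_id ORDER BY order_date) AS running_total
FROM sales.orders
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print("Query execution id:", response["QueryExecutionId"])
```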
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.

Has substantial hands-on expertise in Linux, HTTPS, proxies, and Perl/Python scripting.
Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.
Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid proxy, HTTPS, and equivalent scripting. Continuously aligns the network with the Company’s objectives through ongoing development, improvement, and automation.
Preferably 10+ years of experience in network design and the delivery of technology-centric, customer-focused services.
Preferably 3+ years in modern software-defined networking, preferably in cloud-based environments.
Diploma or bachelor’s degree in Engineering, Computer Science/Information Technology, or its equivalent.
Preferably possess a valid RHCE (Red Hat Certified Engineer) certification.
Preferably possess a vendor proxy certification (Forcepoint / Websense / Blue Coat / equivalent).
Must possess advanced knowledge of TCP/IP concepts and fundamentals, plus a good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.
Fundamental understanding of proxies and PAC files.
Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.
Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.
Coding skills such as Perl, Python, and Shell scripting will be advantageous (see the Python sketch below).
Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.
Excellent communication skills – oral, written and presentation, across various types of target audiences.
Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, copes well under pressure, and will roll up his/her sleeves to drive a project to success in a challenging environment.
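Since the role combines Squid/HTTPS knowledge with Python scripting, here is a small illustrative health-check script that fetches an HTTPS URL through a proxy using the requests library; the proxy host, port, and test URL are placeholders, not values from the posting.

```python
# Illustrative sketch: verifying HTTPS reachability through a Squid-style proxy.
# The proxy address and the test URL are placeholders.
import requests

PROXY = "http://proxy.example.internal:3128"  # 3128 is Squid's default port
proxies = {"http": PROXY, "https": PROXY}

try:
    resp = requests.get("https://example.com", proxies=proxies, timeout=10)
    print("Status via proxy:", resp.status_code)
except requests.RequestException as exc:
    print("Proxy check failed:", exc)
```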

Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5–7 years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing (a sketch follows this list)
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
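For illustration of the serverless item above, a minimal Python Lambda handler of the sort this role would write might look like the following; the event shape follows the standard S3 notification format, and the row-counting step is a placeholder for real transformation logic.

```python
# Minimal sketch of an S3-triggered AWS Lambda handler for light data processing.
# The processing step (row counting) is a placeholder for real logic.
import csv
import io

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(body)))
    print(f"{key}: {len(rows)} rows")  # placeholder for real processing
    return {"rows": len(rows)}
```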
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing (see the sketch after this list).
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
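As a hedged illustration of the validation-automation bullet above, the sketch below runs a few simple quality checks on a PySpark DataFrame; the input path, column names, and thresholds are invented.

```python
# Hedged sketch: simple automated data-quality checks on a PySpark DataFrame.
# Input path, column names, and thresholds are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/orders/")  # placeholder

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
negative_amounts = df.filter(F.col("amount") < 0).count()

assert null_ids == 0, f"{null_ids} rows have a null order_id"
assert negative_amounts / max(total, 1) < 0.01, "too many negative amounts"
print(f"Checks passed on {total} rows")
```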
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.

What We’re Looking For
- 4+ years of backend development experience in scalable web applications.
- Strong expertise in Python, Django ORM, and RESTful API design.
- Familiarity with relational databases like PostgreSQL and MySQL.
- Comfortable working in a startup environment with multiple priorities.
- Understanding of cloud-native architectures and SaaS models.
- Strong ownership mindset and ability to work with minimal supervision.
- Excellent communication and teamwork skills.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience with Databricks, and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note: the salary bracket will vary according to the candidate’s experience -
- Experience from 4 yrs to 6 yrs - salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - salary up to 30 LPA
- Experience of more than 8 yrs - salary up to 40 LPA

Job Title : Automation Quality Engineer (Gen AI)
Experience : 3 to 5+ Years
Location : Bangalore / Chennai / Pune
Role Overview :
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities :
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate into CI/CD pipelines.
- Validate model accuracy and robustness, and monitor model drift (see the sketch after this list).
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
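To make the model-validation responsibility above concrete, here is an illustrative pytest-style check that gates a classifier on a held-out accuracy threshold; the model loader, test-set artifact, and threshold are assumptions, not part of the posting.

```python
# Illustrative pytest sketch: regression-testing a model's accuracy against a
# fixed threshold so CI fails if quality drifts. Artifacts and the threshold
# are assumptions.
import joblib
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed quality gate

def test_model_accuracy():
    model = joblib.load("model.joblib")             # hypothetical artifact
    X_test, y_test = joblib.load("testset.joblib")  # hypothetical held-out set
    predictions = model.predict(X_test)
    assert accuracy_score(y_test, predictions) >= ACCURACY_FLOOR
```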
Mandatory Skills :
- Test Automation : Selenium, Playwright, or DeepEval
- Programming/Scripting : Python, JavaScript
- API Testing : Postman, REST Assured
- Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
- Performance Testing : JMeter
- Bug Tracking : Azure DevOps
- Methodologies : Agile delivery experience
- Soft Skills : Strong communication and problem-solving abilities

Job Title: AI Engineer - NLP/LLM Data Product Engineer
Location: Chennai, TN - Hybrid
Duration: Full time
Job Summary:
About the Role:
We are growing our Data Science and Data Engineering team and are looking for an experienced AI Engineer specializing in creating GenAI LLM solutions. This position involves collaborating with clients and their teams, discovering gaps for automation using AI, designing customized AI solutions, and implementing technologies to streamline data entry processes within the healthcare sector.
Responsibilities:
· Conduct detailed consultations with clients’ functional teams to understand requirements; one use case involves handwritten medical records.
· Analyze existing data entry workflows and propose automation opportunities.
Design:
· Design tailored AI-driven solutions for the extraction and digitization of information from handwritten medical records.
· Collaborate with clients to define project scopes and objectives.
Technology Selection:
· Evaluate and recommend AI technologies, focusing on NLP, LLMs, and machine learning.
· Ensure seamless integration with existing systems and workflows.
Prototyping and Testing:
· Develop prototypes and proof-of-concept models to demonstrate the feasibility of proposed solutions (see the sketch below).
· Conduct rigorous testing to ensure accuracy and reliability.
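As a toy proof-of-concept of the kind described here, the sketch below extracts text from a scanned form with pytesseract; production-grade handwriting recognition for medical records would require specialized vision or LLM models, so treat this purely as an assumption-laden starting point.

```python
# Toy proof-of-concept sketch: OCR text extraction from a scanned document.
# pytesseract handles printed text well; real handwritten medical records would
# need specialized handwriting/vision models. The file path is a placeholder.
from PIL import Image
import pytesseract

image = Image.open("scanned_form.png")  # placeholder input
text = pytesseract.image_to_string(image)
print(text)
```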
Implementation and Integration:
· Work closely with clients and IT teams to integrate AI solutions effectively.
· Provide technical support during the implementation phase.
Training and Documentation:
· Develop training materials for end-users and support staff.
· Create comprehensive documentation for implemented solutions.
Continuous Improvement:
· Monitor and optimize the performance of deployed solutions.
· Identify opportunities for further automation and improvement.
Qualifications:
· Advanced degree in Computer Science, Artificial Intelligence, or a related field (Master’s or PhD required).
· Proven experience in developing and implementing AI solutions for data entry automation.
· Expertise in NLP, LLMs, and other machine-learning techniques.
· Strong programming skills, especially in Python.
· Familiarity with healthcare data privacy and regulatory requirements.
Additional Qualifications (great to have):
An ideal candidate will have expertise in the most current LLM/NLP models, particularly in the extraction of data from clinical reports, lab reports, and radiology reports. The ideal candidate should have a deep understanding of EMR/EHR applications and patient-related data.

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (a sketch follows this list).
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
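To sketch the Raw → Silver → Gold orchestration named above, here is a hedged Airflow (2.x) DAG outline; the task bodies and schedule are placeholders, and real tasks might instead trigger Databricks jobs via a Databricks operator.

```python
# Hedged sketch: an Airflow DAG ordering Raw -> Silver -> Gold transformations.
# Task bodies and the schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def transform(layer: str) -> None:
    print(f"transforming into {layer} layer")  # placeholder logic

with DAG(
    dag_id="raw_silver_gold",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    raw = PythonOperator(task_id="raw", python_callable=transform, op_args=["raw"])
    silver = PythonOperator(task_id="silver", python_callable=transform, op_args=["silver"])
    gold = PythonOperator(task_id="gold", python_callable=transform, op_args=["gold"])

    raw >> silver >> gold
```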
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
⚠️ Please apply only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG


Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch (see the sketch below).
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake, and other data file formats.
Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.
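For a flavour of the stack above, here is a hedged sketch combining XGBoost training with MLflow metric logging; the dataset, split, and hyperparameters are illustrative choices only.

```python
# Hedged sketch: training an XGBoost classifier and logging to MLflow.
# Dataset, split, and hyperparameters are illustrative choices.
import mlflow
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = XGBClassifier(n_estimators=200, max_depth=4)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)
    print("accuracy:", acc)
```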


Job description
We are looking for an experienced Python developer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers.
Responsibilities:
- Coordinating with development teams to determine application requirements.
- Writing scalable code using Python programming language.
- Testing and debugging applications.
- Developing back-end components.
- Integrating user-facing elements using server-side logic.
- Assessing and prioritizing client feature requests.
- Integrating data storage solutions (see the sketch after this list).
- Coordinating with front-end developers.
- Reprogramming existing databases to improve functionality.
- Developing digital tools to monitor online traffic.
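Given the Flask/MongoDB stack this role names, the following is a minimal hedged sketch of a REST endpoint backed by MongoDB via pymongo; the connection string, database, and collection names are placeholders.

```python
# Minimal hedged sketch: a Flask REST endpoint backed by MongoDB (pymongo).
# Connection string, database, and collection names are placeholders.
from bson.objectid import ObjectId
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")  # placeholder connection
items = client["appdb"]["items"]                   # placeholder db/collection

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    doc = items.find_one({"_id": ObjectId(item_id)})
    if doc is None:
        return jsonify({"error": "not found"}), 404
    doc["_id"] = str(doc["_id"])  # ObjectId is not JSON-serializable
    return jsonify(doc)
```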
Requirements:
- Bachelor's degree in Computer Science, Computer Engineering, or related field.
- 2-7 years of experience as a Python Developer.
- Expert knowledge of Python, the Flask framework, and FastAPI.
- Solid experience with MongoDB and Elasticsearch.
- Work experience with RESTful APIs.
- A deep understanding of multi-process architecture and the threading limitations of Python.
- Ability to integrate multiple data sources into a single system.
- Familiarity with testing tools.
- Ability to collaborate on projects and work independently when required.
- Excellent troubleshooting skills.
- Good project management skills.
SKILLS:
- PYTHON
- MONGODB
- FLASK
- REST API DEVELOPMENT
- TWILIO
Job Type: Full-time
Pay: ₹10,000.00 - ₹30,000.00 per month
Benefits:
- Flexible schedule
- Paid time off
Schedule:
- Day shift
Supplemental Pay:
- Overtime pay
Ability to commute/relocate:
- Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Experience:
- Python: 1 year (Required)
Work Location: In person

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!


Key Responsibilities:
- Develop and maintain scalable Python applications for AI/ML projects.
- Design, train, and evaluate machine learning models for classification, regression, NLP, computer vision, or recommendation systems.
- Collaborate with data scientists, ML engineers, and software developers to integrate models into production systems.
- Optimize model performance and ensure low-latency inference in real-time environments.
- Work with large datasets to perform data cleaning, feature engineering, and data transformation (see the sketch after this list).
- Stay current with new developments in machine learning frameworks and Python libraries.
- Write clean, testable, and efficient code following best practices.
- Develop RESTful APIs and deploy ML models via cloud or container-based solutions (e.g., AWS, Docker, Kubernetes).
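To illustrate the data-preparation bullet above, here is a hedged pandas sketch of basic cleaning and feature engineering; the dataset and column names are invented for illustration.

```python
# Hedged sketch: basic cleaning and feature engineering with pandas.
# The dataset and column names are invented.
import numpy as np
import pandas as pd

df = pd.read_csv("transactions.csv")  # placeholder input

df = df.drop_duplicates().dropna(subset=["amount", "timestamp"])
df["timestamp"] = pd.to_datetime(df["timestamp"])
df["hour"] = df["timestamp"].dt.hour                     # temporal feature
df["log_amount"] = np.log1p(df["amount"].clip(lower=0))  # tame skewed amounts
df["is_weekend"] = df["timestamp"].dt.dayofweek >= 5     # boolean feature
print(df[["hour", "log_amount", "is_weekend"]].head())
```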
Share CV to:
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- Have a background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
Dear Candidate,
We are urgently hiring an AWS Cloud Engineer for our Bangalore location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in industry (20-25% hike on current CTC)
Note:
- Only immediate to 15-day joiners will be preferred.
- Only candidates from Tier 1 companies will be shortlisted and selected.
- Candidates with a notice period of more than 30 days will be rejected during screening.
- Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge; Redshift and Snowflake preferred
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, incl. Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, incl. Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across NTT Data customer account base. Working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technically hands-on
Provide Incident and Problem management on the AWS IaaS and PaaS platform
Involvement in the resolution of high-priority Incidents and Problems in an efficient and timely manner
Actively monitor the AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN

CoinFantasy is looking for an experienced Senior AI Architect to lead both the decentralised protocol development and the design of AI-driven applications on this network. As a visionary in AI and distributed computing, you will play a central role in shaping the protocol’s technical direction, enabling efficient task distribution, and scaling AI use cases across a heterogeneous, decentralised infrastructure.
Job Responsibilities
- Architect and oversee the protocol’s development, focusing on dynamic node orchestration, layer-wise model sharding, and secure, P2P network communication.
- Drive the end-to-end creation of AI applications, ensuring they are optimised for decentralised deployment and include use cases with autonomous agent workflows.
- Architect AI systems capable of running on decentralised networks, ensuring they balance speed, scalability, and resource usage.
- Design data pipelines and governance strategies for securely handling large-scale, decentralised datasets.
- Implement and refine strategies for swarm intelligence-based task distribution and resource allocation across nodes. Identify and incorporate trends in decentralised AI, such as federated learning and swarm intelligence, relevant to various industry applications.
- Lead cross-functional teams in delivering full-precision computing and building a secure, robust decentralised network.
- Represent the organisation’s technical direction, serving as the face of the company at industry events and client meetings.
Requirements
- Bachelor’s/Master’s/Ph.D. in Computer Science, AI, or related field.
- 12+ years of experience in AI/ML, with a track record of building distributed systems and AI solutions at scale.
- Strong proficiency in Python, Golang, and machine learning frameworks (e.g., TensorFlow, PyTorch).
- Expertise in decentralised architecture, P2P networking, and heterogeneous computing environments.
- Excellent leadership skills, with experience in cross-functional team management and strategic decision-making.
- Strong communication skills, adept at presenting complex technical solutions to diverse audiences.
About Us
CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users. It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.
Building on this foundation, we are now developing a groundbreaking decentralised protocol that will transform the AI landscape.
Website:
Benefits
- Competitive Salary
- An opportunity to be part of the Core team in a fast-growing company
- A fulfilling, challenging and flexible work experience
- Practically unlimited professional and career growth opportunities
About koolio.ai
Website: www.koolio.ai
Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Internship Position
We are looking for a motivated Backend Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.
Key Responsibilities:
- Assist in the development and maintenance of backend systems and APIs.
- Write reusable, testable, and efficient code to support scalable web applications.
- Work with cloud services and server-side technologies to manage data and optimize performance.
- Troubleshoot and debug existing backend systems, ensuring reliability and performance.
- Collaborate with cross-functional teams to integrate frontend features with backend logic.
Requirements and Skills:
- Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Good understanding of server-side technologies like Python
- Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
- Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
- Knowledge of version control systems such as Git.
- Soft Skills:
- Eagerness to learn and adapt in a fast-paced environment.
- Strong problem-solving and critical-thinking skills.
- Effective communication and teamwork capabilities.
- Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.
Why Join Us?
- Gain real-world experience working on a cutting-edge platform.
- Work alongside a talented and passionate team committed to innovation.
- Receive mentorship and guidance from industry experts.
- Opportunity to transition to a full-time role based on performance and company needs.
This internship is an excellent opportunity to kickstart your career in backend development, build critical skills, and contribute to a product that has a real-world impact.



About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
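As a small hedged example of the Cloud Functions work in the last bullet, here is an HTTP-triggered function using the Functions Framework for Python; the query parameter and response shape are invented placeholders.

```python
# Hedged sketch: an HTTP-triggered Google Cloud Function (Python runtime)
# using the Functions Framework. The payload handling is a placeholder.
import functions_framework

@functions_framework.http
def audio_status(request):
    project_id = request.args.get("project_id", "unknown")  # illustrative param
    # Placeholder: a real function might look up render status in Firestore/GCS.
    return {"project_id": project_id, "status": "processing"}, 200
```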
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: 6+ years of proven experience as a Full Stack Developer or in a similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Total Yearly Compensation: ₹25 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact


Responsibilities:
• Analyze and understand business requirements and translate them into efficient, scalable business logic.
• Develop, test, and maintain software that meets new requirements and integrates well with existing systems.
• Troubleshoot and debug software issues and provide solutions.
• Collaborate with cross-functional teams, including product managers, designers, and developers, to deliver high-quality products.
• Write clean, maintainable, and efficient code.
• Participate in code reviews and provide constructive feedback to peers.
• Communicate effectively with team members and stakeholders to understand requirements and provide updates.
Required Skills:
• Strong problem-solving skills with the ability to analyze complex issues and provide solutions.
• Ability to quickly understand new problem statements and translate them into functional business logic.
• Proficiency in at least one programming language such as Java, Node.js, or C/C++.
• Strong understanding of software development life cycle (SDLC).
• Excellent communication skills, both verbal and written.
• Team player with the ability to collaborate effectively with different teams.
Preferred Qualifications:
• Experience with Java, Golang, or Rust is a plus.
• Familiarity with cloud platforms, microservices architecture, and API development.
• Prior experience working in an agile environment.
• Strong debugging and optimization skills.
Educational Qualifications:
• Bachelor's degree in Computer Science, Engineering, related field, or equivalent work experience.