50+ Python Jobs in Chennai | Python Job openings in Chennai
Apply to 50+ Python Jobs in Chennai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.




Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM to 5 AM / 9 PM to 6 AM
Only immediate joiners can apply.
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation); a minimal sketch follows this list.
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
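For context on the IaC item above, here is a minimal, hedged sketch of provisioning a single S3 bucket with the AWS CDK for Python; the stack and bucket names are illustrative, not part of this posting.

```python
# Minimal AWS CDK v2 (Python) sketch: one stack, one private versioned bucket.
# Assumes `pip install aws-cdk-lib` and a bootstrapped account; names are illustrative.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class StorageStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # `cdk deploy` turns this construct into a CloudFormation resource.
        s3.Bucket(
            self,
            "AppAssets",
            versioned=True,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = cdk.App()
StorageStack(app, "StorageStack")
app.synth()
```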



About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians, including PhDs from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/Azure/GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infrastructure and data-related services on AWS (EC2, EMR, RDS, Redshift) and/or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow); a minimal DAG sketch follows this list.
- Experience with common relational SQL, NoSQL and Graph databases.
- Strong experience with scripting languages: Python, PySpark, Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc.
- Experience with big data tools (Spark, Kafka, etc.) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
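As referenced in the skills list above, a minimal Airflow DAG might look like the sketch below; it assumes Airflow 2.4+ and uses placeholder task bodies rather than any real pipeline.

```python
# Minimal Apache Airflow sketch: a two-step daily extract/load pipeline.
# Task bodies are placeholders; a real pipeline would call source/warehouse APIs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from a source system")

def load():
    print("write transformed rows to the warehouse")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # `schedule` replaces `schedule_interval` in Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```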
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about the rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

What you’ll do here:
• Lead the technical implementation of our platform using Python for new and existing clients
• Collaborate with internal teams to deliver seamless onboarding experiences for our clients
• Translate business use-cases into scalable, secure, and maintainable technical solutions
• Own the technical relationship with clients, acting as a trusted advisor throughout the implementation lifecycle
• Conduct code reviews, enforce best practices, and ensure high-quality deliverables
• Mentor developers and contribute to internal tooling and automation
• Troubleshoot and resolve complex integration issues across cloud environments
What you will need to thrive:
• 8+ years of software development experience.
• Strong proficiency in Python (preferred), C# .NET, or C/C++.
• Proven track record of leading technical implementations for enterprise clients.
• Deep understanding of Pandas, NumPy, RESTful APIs, webhooks, and data transformation pipelines (see the short sketch after this list).
• Excellent communication and stakeholder management skills.
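To make the Pandas/NumPy item above concrete, here is a small, hedged example of the kind of data transformation implied; the columns and values are invented.

```python
# Hedged pandas/NumPy illustration: clean a raw extract and aggregate it.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "client": ["acme", "acme", "globex"],
    "amount": ["10.5", None, "7.25"],   # strings with a missing value, as from a CSV
})

clean = (
    raw.dropna(subset=["amount"])                      # drop rows with no amount
       .assign(amount=lambda d: d["amount"].astype(float),
               log_amount=lambda d: np.log1p(d["amount"]))
)
print(clean.groupby("client")["amount"].sum())
```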

Senior Data Engineer
Experience: 5+ years
Chennai/ Trichy (Hybrid)
Type: Full-time
Skills: GCP + Airflow + BigQuery + Python + Docker
The Role
As a Senior Data Engineer, you own new initiatives, design and build world-class platforms to measure and optimize ad performance. You ensure industry-leading scalability and reliability of mission-critical systems processing billions of real-time transactions a day. You apply state-of-the-art technologies, frameworks, and strategies to address complex challenges with Big Data processing and analytics. You work closely with the talented engineers across different time zones in building industry-first solutions to measure and optimize ad performance.
What you’ll do
● Write solid code with a focus on high performance for services supporting high throughput and low latency
● Architect, design, and build big data processing platforms that handle tens of TB per day, serve thousands of clients, and support advanced analytic workloads
● Provide meaningful and relevant feedback to junior developers and stay up to date with system changes
● Explore the technological landscape for new ways of producing, processing, and analyzing data to gain insights into both our users and our product features
● Design, develop, and test data-driven products, features, and APIs that scale
● Continuously improve the quality of deliverables and SDLC processes
● Operate production environments, investigate issues, assess their impact, and develop feasible solutions.
● Understand business needs and work with product owners to establish priorities
● Bridge the gap between Business / Product requirements and technical details
● Work in multi-functional agile teams with end-to-end responsibility for product development and delivery
Who you are
● 3-5+ years of programming experience with object-oriented design and/or functional programming, in Python or a related language
● Love what you do, are passionate about crafting clean code, and have a solid foundation
● Deep understanding of distributed system technologies, standards, and protocols, with 2+ years of experience working in distributed systems such as Airflow, BigQuery, Spark, and the Kafka ecosystem (Kafka Connect, Kafka Streams, or Kinesis), and building data pipelines at scale
● Excellent SQL and dbt query-writing abilities, and a strong understanding of data
● Care about agile software processes, data-driven development, reliability, and responsible experimentation
● Genuine desire to automate decision-making, processes, and workflows
● Experience working with orchestration tools like Airflow
● Good understanding of semantic layers and experience with tools like LookML and Cube
● Excellent communication skills and a team player
● Experience with Google BigQuery or Snowflake (a minimal query sketch follows this list)
● Experience with a cloud environment, ideally Google Cloud Platform
● Experience with container technologies such as Docker and Kubernetes
● Familiarity with ad-serving technologies and standards
● Familiarity with AI tools like Cursor AI and Copilot
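As a concrete anchor for the BigQuery item above, here is a minimal, hedged query sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are illustrative, and an authenticated environment is assumed.

```python
# Hedged BigQuery sketch: top ad impressions from a hypothetical events table.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT ad_id, COUNT(*) AS impressions
    FROM `my-project.analytics.events`   -- hypothetical table
    GROUP BY ad_id
    ORDER BY impressions DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.ad_id, row.impressions)
```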
Job Title: Senior QA Automation Architect (Cloud & Kubernetes)
Experience: 6+ Years
Location: India (Multiple Offices)
Shift Timings: 12 PM to 9 PM (Noon Shift)
Working Days: 5 Days WFO (No Hybrid)
About the Role:
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities:
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills:
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms: AWS / Azure / GCP
- CI/CD Tools: Jenkins, GitHub Actions
- Scripting: Python, pytest, Bash
- Monitoring & Performance: Prometheus, Grafana, Jaeger, k6
- IaC Practices: Terraform / Ansible
Good to Have:
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.



JOB DESCRIPTION/PREFERRED QUALIFICATIONS:
REQUIRED SKILLS/COMPETENCIES:
Programming Languages:
- Strong in Python, data structures, and algorithms.
- Hands-on with NumPy, Pandas, Scikit-learn for ML prototyping.
Machine Learning Frameworks:
- Understanding of supervised/unsupervised learning, regularization, feature engineering, model selection, cross-validation, ensemble methods (XGBoost, LightGBM).
Deep Learning Techniques:
- Proficiency with PyTorch or TensorFlow/Keras
- Knowledge of CNNs, RNNs, LSTMs, Transformers, Attention mechanisms.
- Familiarity with optimization (Adam, SGD), dropout, batch norm.
LLMs & RAG:
- Hugging Face Transformers (tokenizers, embeddings, model fine-tuning).
- Vector databases (Milvus, FAISS, Pinecone, Elasticsearch); a small embedding-and-retrieval sketch follows this section.
- Prompt engineering, function/tool calling, JSON schema outputs.
Data & Tools:
- SQL fundamentals; exposure to data wrangling and pipelines.
- Git/GitHub, Jupyter, basic Docker.
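As a small, hedged illustration of the LLMs & RAG items above (embeddings plus a vector index), the sketch below pairs sentence-transformers with FAISS; the model name and documents are illustrative.

```python
# Hedged retrieval sketch: embed documents, index them, query by similarity.
# Assumes `pip install sentence-transformers faiss-cpu`.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model choice
docs = ["wafer inspection overview", "defect classification notes"]

emb = model.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])   # inner product on unit vectors = cosine
index.add(emb)

query = model.encode(["how are defects classified?"], normalize_embeddings=True)
scores, ids = index.search(query, 1)
print(docs[ids[0][0]], float(scores[0][0]))
```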
WHAT ARE WE LOOKING FOR?
- Solid academic foundation with strong applied ML/DL exposure.
- Curiosity to learn cutting-edge AI and willingness to experiment.
- Clear communicator who can explain ML/LLM trade-offs simply.
- Strong problem-solving and ownership mindset.
MINIMUM QUALIFICATIONS:
- Doctorate (Academic) Degree and 2 years of related work experience; or Master's Level Degree and 5 years of related work experience; or Bachelor's Level Degree and 7 years of related work experience in building AI systems/solutions with Machine Learning, Deep Learning, and LLMs.
MUST-HAVES:
- Education/qualification: Preferably from a premier institute such as IIT, IISc, IIIT, NIT, or BITS; regional tier-1 colleges are also considered.
- Doctorate (Academic) Degree and 2 years of related work experience; or Master's Level Degree and 5 years of related work experience; or Bachelor's Level Degree and 7 years of related work experience
- Minimum 5 years of experience in the mandatory skills: Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing
- 3.5 to 4 years of proficiency with PyTorch or TensorFlow/Keras
- Candidates from engineering product companies have higher chances of getting shortlisted (current company or past experience)
QUESTIONNAIRE:
Do you have at least 5 years of experience with Python, Deep Learning, Machine Learning, Algorithm Development, and Image Processing? Please mention the skills and years of experience:
Do you have experience with PyTorch or TensorFlow / Keras?
- PyTorch
- TensorFlow / Keras
- Both
How many years of experience do you have with PyTorch or TensorFlow / Keras?
- Less than 3 years
- 3 to 3.5 years
- 3.5 to 4 years
- More than 4 years
Is the candidate willing to relocate to Chennai?
- Ready to relocate
- Based in Chennai
What type of company have you worked for in your career?
- Service-based IT company
- Product company
- Semiconductor company
- Hardware manufacturing company
- None of the above



JOB DESCRIPTION/PREFERRED QUALIFICATIONS:
KEY RESPONSIBILITIES:
- Lead and mentor a team of algorithm engineers, providing guidance and support to ensure their professional growth and success.
- Develop and maintain the infrastructure required for the deployment and execution of algorithms at scale.
- Collaborate with data scientists, software engineers, and product managers to design and implement robust and scalable algorithmic solutions.
- Optimize algorithm performance and resource utilization to meet business objectives.
- Stay up to date with the latest advancements in algorithm engineering and infrastructure technologies and apply them to improve our systems.
- Drive continuous improvement in development processes, tools, and methodologies.
QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience in developing computer vision and image processing algorithms and ML/DL algorithms.
- Familiar with high performance computing, parallel programming and distributed systems.
- Strong leadership and team management skills, with a track record of successfully leading engineering teams.
- Proficiency in programming languages such as Python, C++ and CUDA.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
PREFERRED QUALIFICATIONS:
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with GPU architecture and development toolkits like Docker and Apptainer.
MINIMUM QUALIFICATIONS:
- Bachelor's degree plus 8+ years of experience
- Master's degree plus 8+ years of experience
- Familiar with high performance computing, parallel programming and distributed systems.
MUST-HAVE SKILLS:
- PhD with 6 years of industry experience, or M.Tech with 8 years, or B.Tech with 10 years of experience.
- 14 years of experience if applying for an IC role.
- Minimum 1 year of experience working as a Manager/Lead
- 8 years' experience in any of the programming languages such as Python, C++, or CUDA.
- 8 years' experience in Machine Learning, Artificial Intelligence, and Deep Learning.
- 2 to 3 years of experience in Image Processing & Computer Vision is a MUST
- Product / Semiconductor / Hardware Manufacturing company experience is a MUST; candidates should be from engineering product companies
- Candidates from Tier 1 colleges (IIT, IIIT, VIT, NIT) preferred
- Relocation to Chennai is mandatory
NICE TO HAVE SKILLS:
- Candidates from Semicon or manufacturing companies
- Candidates with a CGPA above 8

SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, and SQL for analytics, including window functions and performance tuning (a short PySpark sketch follows this list).
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Terraform with 15+ years of practice; CI/CD (GitHub Actions, Jenkins) with 10+ years; config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
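As a brief, hedged illustration of the Python/PySpark window-function item above, the sketch below keeps each customer's most recent order; the DataFrame and column names are invented.

```python
# Hedged PySpark sketch: classic row_number()-over-window de-duplication.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("latest-order").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2024-01-01", 10.0), ("c1", "2024-02-01", 12.5), ("c2", "2024-01-15", 7.0)],
    ["customer_id", "order_date", "amount"],
)

# Rank orders per customer by recency, then keep only the newest one.
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest = orders.withColumn("rn", F.row_number().over(w)).filter(F.col("rn") == 1)
latest.show()
```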
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.

Description
Our engineering team is hiring a backend software engineer to contribute to the development of our Warehouse Management System (WMS) and its companion Handy Terminal device, both of which are integral to our logistics product suite. These systems are designed to seamlessly integrate with our state-of-the-art ASRS systems. The team’s mission is to build and maintain a robust, tested, and high-performance backend architecture, including databases and APIs, shared across all deployments. While the role emphasizes strong software development and engineering practices, we also value open communication and a collaborative team spirit.
In this role, you will:
- Design, develop, and maintain a key component that supports the efficient flow of supply chain operations.
- Enhance code quality and ensure comprehensive test coverage through continuous improvement.
- Collaborate effectively with cross-functional development teams to integrate solutions and align best practices.
Requirements
Minimum Qualifications:
- 2.5+ years of professional experience with Python, with a focus on versions 3.10 and above.
- Practical experience working with web frameworks such as FastAPI or Django.
- Strong understanding of SQL database principles, particularly with PostgreSQL.
- Proficiency in testing and build automation tools, including pytest, GitHub Actions, and Docker (a small FastAPI-plus-pytest sketch follows this list).
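To ground the FastAPI and pytest items above, here is a minimal, hedged example of an endpoint and its test; the route and payload are illustrative.

```python
# Hedged FastAPI + pytest sketch; run the test with `pytest`.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    # A real WMS handler would look the item up in PostgreSQL.
    return {"item_id": item_id}

client = TestClient(app)

def test_read_item_returns_requested_id():
    resp = client.get("/items/42")
    assert resp.status_code == 200
    assert resp.json() == {"item_id": 42}
```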
Bonus Points:
- Experience with NoSQL databases, particularly with Redis.
- Practical experience with asynchronous programming (e.g., asyncio) or message bus systems (see the sketch after this list).
- Ability to clearly articulate technology choices and rationale (e.g., Tornado vs. Flask).
- Experience presenting at conferences or meetups, regardless of scale.
- Contributions to open-source projects.
- Familiarity with WMS concepts and logistics-related processes.
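For the asyncio bonus point above, here is a minimal, hedged illustration of concurrent awaiting; the sleep stands in for real I/O such as a message-bus or database call.

```python
# Hedged asyncio sketch: fan out several concurrent "fetches" and gather results.
import asyncio

async def fetch(item_id: int) -> dict:
    await asyncio.sleep(0.1)          # placeholder for network or database I/O
    return {"item_id": item_id}

async def main() -> None:
    results = await asyncio.gather(*(fetch(i) for i in range(5)))
    print(results)

asyncio.run(main())
```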
Is This the Right Role for You?
- You are motivated by the opportunity to make a tangible impact and deliver significant business value.
- You appreciate APIs that are thoughtfully designed with clear, well-defined objectives.
- You thrive on understanding how your work integrates and contributes to a larger, cohesive system.
- You are proactive and self-directed, identifying potential issues and gaps before they become problems in production.
Benefits
- Competitive salary package.
- Opportunity to work with a highly talented and diverse team.
- Comprehensive visa and relocation support.

Description
Our Engineering team is changing gears to meet the growing needs of our customers - from a handful of robots to hundreds of robots; from a small team to multiple squads. The team works closely with some of the premier enterprise customers in Japan to build state-of-the-art robotics solutions by leveraging rapyuta.io, our cloud robotics platform, and the surrounding ecosystem. The team’s mission is to pioneer scalable, collaborative, and flexible robotics solutions.
This role includes testing with real robots in a physical environment, testing virtual robots in a simulated environment, automating API tests, and automating systems level testing.
The ideal candidate is interested in working in a hands-on role with state-of-the-art robots.
In this role, the QA Engineer will be responsible for:
- Assisting in reviewing and analyzing the system specifications to define test cases
- Creating and maintaining test plans
- Executing test plans in a simulated environment and on hardware
- Defect tracking and generating bug and test reports
- Participating in implementing and improving QA processes
- Implementation of test automation for robotics systems
Requirements
Minimum qualifications
- 2.5+ years of technical experience in software Quality Assurance as an Individual Contributor
- Bachelor's degree in engineering, or combination of equivalent education and experience
- Experience writing, maintaining and executing software test cases, both manual and automated
- Experience writing, maintaining and executing software test cases that incorporate hardware interactions, including manual and automated tests to validate the integration of software with robotics systems
- Demonstrated experience with Python testing frameworks
- Expertise in Linux ecosystem
- Advanced knowledge of testing approaches: test levels; BDD/TDD; black-box/white-box approaches; regression testing
- Knowledge and practical experience of Agile principles and methodologies such as Scrum
- HTTP API testing experience (a minimal pytest example follows this list)
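As referenced in the last item above, a minimal, hedged HTTP API test with pytest and requests might look like this; the base URL and response shape are illustrative.

```python
# Hedged HTTP API test sketch; run with `pytest`. Assumes `pip install requests`.
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```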
Preferred qualifications
- Knowledge of HWIL, simulations, ROS
- Basic understanding of embedded systems and electronics
- Experience with developing/QA for robotics or hardware products will be a plus.
- Experience with testing frameworks such as TestNG, JUnit, pytest, Playwright, Selenium, or similar tools
- ISTQB certification
- Japanese language proficiency
- Proficiency with version control systems such as Git
- Understanding of CI/CD systems such as GitHub Actions, Jenkins, or CircleCI
Benefits
- Competitive salary
- International working environment
- Bleeding edge technology
- Working with exceptionally talented engineers


Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop, validate, and implement models and code in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce and Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience in migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Willingness to learn and comprehend periodic changes in regulatory requirements per the FED.

ABOUT THE JOB:
Job Title: QA Automation Specialist
Location: Teynampet, Chennai
Job Type: Full-time
Company: Gigadesk Technologies Pvt. Ltd. [Greatify.ai]
COMPANY DESCRIPTION:
At Greatify.ai, we are transforming educational institutions with cutting-edge AI-powered solutions. Our platform acts as a smart operating system for colleges, schools, and universities—enhancing learning, streamlining operations, and maximizing efficiency.
With 100+ institutions served, 100,000+ students impacted globally, and 1,000+ educators empowered, we are redefining the future of education.
COMPANY WEBSITE: https://www.greatify.ai/
JOB DESCRIPTION:
As a QA Automation Specialist at Greatify, you will be responsible for designing, building, and maintaining robust automated test frameworks and suites covering UI, API, integration, regression, and performance tests for our ed‑tech platforms. As part of an Agile, cross‑functional team, you’ll integrate automation into our CI/CD pipelines to speed up release cycles while ensuring high product quality and reliability. Your role ensures consistent quality, provides actionable insights, and champions automation best practices across the QA function.
KEY RESPONSIBILITIES:
1. Quality Assurance Strategy:
- Develop and own QA strategy for EdTech product suites.
- Work with Product and Engineering teams to define quality benchmarks and release criteria.
- Ensure quality is embedded early in the software development lifecycle.
2. Test Planning & Execution:
- Design, write, and execute test cases and scenarios—manual and automated.
- Manage regression, integration, and exploratory testing.
- Monitor test outcomes, identify risks, and mitigate issues.
3. Automation Framework Development
- Develop scalable, maintainable automation frameworks using Playwright and Selenium, structured with Cucumber (BDD) for readable test specifications.
- Write automation scripts in Python and Java, following best practices like modular design and the Page Object Model (a minimal Python sketch follows this list)
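As noted above, a Page Object Model keeps selectors in one place per page; below is a minimal, hedged Python sketch using Playwright's sync API, with an invented URL and invented selectors.

```python
# Hedged Page Object Model sketch with Playwright's sync API.
# Assumes `pip install playwright` and `playwright install chromium`.
from playwright.sync_api import Page, sync_playwright

class LoginPage:
    """Encapsulates the login page so tests never touch raw selectors."""

    def __init__(self, page: Page):
        self.page = page

    def open(self) -> None:
        self.page.goto("https://example.com/login")   # hypothetical app URL

    def login(self, user: str, password: str) -> None:
        self.page.fill("#username", user)             # hypothetical selectors
        self.page.fill("#password", password)
        self.page.click("button[type=submit]")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    LoginPage(page).open()
    browser.close()
```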
4. Bug Tracking and Reporting:
- Log, triage, and track bugs using tools like Jira.
- Generate insightful quality reports for stakeholders.
5. Usability and Functional Testing:
- Evaluate UX across web/mobile platforms.
- Support UX teams with accessibility and user satisfaction testing.
6. Collaboration and Mentoring:
- Foster a strong QA culture with best practices and collaboration.
QUALIFICATIONS:
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- 2+ years of QA experience with at least 2 years in automation testing.
- Proficiency in writing automation scripts using mainstream tools
- Experience in education tech systems.
- Hands-on knowledge of Agile/Scrum processes.
- Familiarity with programming languages Python and Java, using Playwright and Selenium for automation scripting, and employing JMeter or k6 with Grafana for performance testing.
- Experience setting up CI/CD pipelines via GitHub Actions and Jenkins, and managing test cases and execution tracking in ClickUp
- Experience with cross-browser and mobile automation is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and team collaboration skills.

About Us:
PluginLive is an all-in-one tech platform that bridges the gap between all its stakeholders - Corporates, Institutes, Students, and Assessment & Training Partners. This ecosystem helps Corporates build and position their brand with colleges and the student community to scale their human capital, while increasing student placements for Institutes and giving students a real-time perspective of the corporate world so they can upskill into more desirable candidates.
Role Overview:
Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.
Responsibilities:
- Assist in building and maintaining ETL/ELT pipelines for data processing
- Write SQL queries to extract and analyze data from various sources
- Support data quality checks and basic data validation processes
- Create simple dashboards and reports using visualization tools
- Learn and work with Oracle Cloud services under guidance
- Use Python for basic data manipulation and cleaning tasks
- Document data processes and maintain data dictionaries
- Collaborate with team members to understand data requirements
- Participate in troubleshooting data issues with senior support
- Contribute to data migration tasks as needed
Qualifications:
Required:
- Bachelor's degree in Computer Science, Information Systems, or related field
- Around 2 years of experience in data engineering or a related field
- Strong SQL knowledge and database concepts
- Comfortable with Python programming
- Understanding of data structures and ETL concepts
- Problem-solving mindset and attention to detail
- Good communication skills
- Willingness to learn cloud technologies
Preferred:
- Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
- Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
- Experience with Pandas for data manipulation
- Understanding of data warehousing concepts
- Familiarity with version control (Git)
- Academic projects or internships involving data processing
Nice-to-Have:
- Knowledge of dbt, BigQuery, or Snowflake
- Exposure to big data concepts
- Experience with Jupyter notebooks
- Comfort with AI-assisted coding tools (Copilot, GPTs)
- Personal projects showcasing data work
What We Offer:
- Mentorship from senior data engineers
- Hands-on learning with modern data stack
- Access to paid AI tools and learning resources
- Clear growth path to mid-level engineer
- Direct impact on product and data strategy
- No unnecessary meetings — focused execution
- Strong engineering culture with continuous learning opportunities

Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric client/server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go.
- Bachelor’s (B.E., B.Tech) or Master’s degree (M.E., M.Tech, MCA) in Computer Science, Computer Engineering, or equivalent.
- 2 years of development experience in a public cloud environment using Kubernetes, etc. (Google Cloud and/or AWS).
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

We are seeking a Full Stack Developer with exceptional communication skills to collaborate daily with our international clients in the US and Australia. This role requires not only technical expertise but also the ability to clearly articulate ideas, gather requirements, and maintain strong client relationships. Communication is the top priority.
The ideal candidate is passionate about technology, eager to learn and adapt to new stacks, and capable of delivering scalable, high-quality solutions across the stack.
Key Responsibilities
- Client Communication: Act as a daily point of contact for clients (US & Australia), ensuring smooth collaboration and requirement gathering.
- Backend Development:
- Design and implement REST APIs and GraphQL endpoints.
- Integrate secure authentication methods including OAuth, Passwordless, and Signature-based authentication.
- Build scalable backend services with Node.js and serverless frameworks.
- Frontend Development:
- Develop responsive, mobile-friendly UIs using React and Tailwind CSS.
- Ensure cross-browser and cross-device compatibility.
- Database Management:
- Work with RDBMS and NoSQL databases such as MongoDB and DynamoDB.
- Cloud & DevOps:
- Deploy applications on AWS / GCP / Azure (knowledge of at least one required).
- Work with CI/CD pipelines, monitoring, and deployment automation.
- Quality Assurance:
- Write and maintain unit tests to ensure high code quality.
- Participate in code reviews and follow best practices.
- Continuous Learning:
- Stay updated on the latest technologies and bring innovative solutions to the team.
Must-Have Skills
- Excellent communication skills (verbal & written) for daily client interaction.
- 2+ years of experience in full-stack development.
- Proficiency in Node.js and React.
- Strong knowledge of REST API and GraphQL development.
- Experience with OAuth, Passwordless, and Signature-based authentication methods.
- Database expertise with RDBMS and NoSQL databases (MongoDB, DynamoDB).
- Experience with Serverless Framework.
- Strong frontend skills: React, Tailwind CSS, responsive design.
Nice-to-Have Skills
- Familiarity with Python for backend or scripting.
- Cloud experience with AWS, GCP, or Azure.
- Knowledge of DevOps practices and CI/CD pipelines.
- Experience with unit testing frameworks and TDD.
Who You Are
- A confident communicator who can manage client conversations independently.
- Passionate about learning and experimenting with new technologies.
- Detail-oriented and committed to delivering high-quality software.
- A collaborative team player who thrives in dynamic environments.


Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client/server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 4-10 years of progressive experience with back-end development in a client/server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana:
Virtana delivers the industry’s broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

Development and Customization:
Build and customize Frappe modules to meet business requirements.
Develop new functionalities and troubleshoot issues in ERPNext applications.
Integrate third-party APIs for seamless interoperability.
Technical Support:
Provide technical support to end-users and resolve system issues.
Maintain technical documentation for implementations.
Collaboration:
Work with teams to gather requirements and recommend solutions.
Participate in code reviews for quality standards.
Continuous Improvement:
Stay updated with Frappe developments and optimize application performance.
Skills Required:
Proficiency in Python, JavaScript, and relational databases.
Knowledge of the Frappe/ERPNext framework and object-oriented programming (a minimal server-side sketch follows this list).
Experience with Git for version control.
Strong analytical skills.
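For the Frappe/ERPNext item above, here is a minimal, hedged server-side sketch: a whitelisted method that Frappe exposes over its REST API. The doctype, filters, and fields are illustrative.

```python
# Hedged Frappe sketch: a whitelisted server method callable over REST
# (e.g., /api/method/<dotted.module.path>.open_tasks). Fields are illustrative.
import frappe

@frappe.whitelist()
def open_tasks(limit: int = 10):
    """Return open Task documents visible to the calling user."""
    return frappe.get_all(
        "Task",
        filters={"status": "Open"},
        fields=["name", "subject"],
        limit_page_length=limit,
    )
```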

Role Overview:
We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), data warehousing principles, and proficiency in data manipulation and visualization tools, including Python and SQL.
Responsibilities:
- Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
- Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
- Define data governance policies and procedures to ensure data quality, security, and compliance.
- Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
- Develop and execute data migration strategies to Oracle Cloud.
- Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
- Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
- Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders (a brief Python sketch follows this list).
- Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
- Ensure the performance and reliability of data visualization dashboards and reports.
- Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
- Troubleshoot data-related issues and provide timely resolutions.
- Document data architectures, data flows, and data visualization solutions.
- Participate in the evaluation and selection of new data technologies and tools.
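As referenced in the Python and visualization items above, here is a brief, hedged sketch of cleaning a small dataset with pandas and charting it with Matplotlib; the data and column names are invented.

```python
# Hedged cleaning-and-visualization sketch with pandas + Matplotlib.
import pandas as pd
import matplotlib.pyplot as plt

raw = pd.DataFrame({
    "region": ["North", "South", "North", None],
    "sales": ["120", "95", "bad", "80"],    # messy strings, as from a raw extract
})

clean = (
    raw.assign(sales=pd.to_numeric(raw["sales"], errors="coerce"))
       .dropna(subset=["region", "sales"])  # drop rows missing a region or a parseable amount
)

clean.groupby("region")["sales"].sum().plot(kind="bar", title="Sales by region")
plt.tight_layout()
plt.show()
```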
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
- Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role.
- Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
- Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
- Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
- Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
- Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
- Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights.
- Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences.
- Experience with data governance and data quality principles.
- Familiarity with agile development methodologies.
- Ability to work independently and collaboratively within a team environment.
Application Link- https://forms.gle/km7n2WipJhC2Lj2r5



About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization, and DevOps tooling (Docker, Kubernetes)
- Experience tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment


Technical Lead
The ideal candidate should possess the following qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Experience: 9+ years in software development with a proven track record of delivering scalable applications.
- Leadership Skills: 4+ years of experience in a technical leadership role, demonstrating strong mentoring abilities.
- Lead and mentor a team of software developers and validation engineers.
- Technical Skills: Proficiency in programming languages and technologies such as C#, .NET, React.js, JavaScript, Web API, SQL/MySQL, or Python, along with the frameworks and tools used in software development.
- General working knowledge of Selenium to support current business automation tools and future automation requirements.
- General working knowledge of PHP desired to support current legacy applications which are on the roadmap for future modernization.
- Strong understanding of the software development lifecycle (SDLC).
- Experience with agile methodologies (Scrum/Kanban or similar).
- Knowledge of version control systems (Git or similar).
- Development Methodologies: Experience with Agile development methodologies and experience with CI/CD pipelines.
- Problem-Solving Skills: Strong analytical and problem-solving abilities that enable the identification of complex technical issues.
- Collaboration: Excellent communication and collaboration skills, with the ability to work effectively within a team environment.
- Innovation: A passion for technology and innovation, with a keen interest in exploring new technologies to find the best solutions.


What You’ll Do
- Build & tune models: embeddings, transformers, retrieval pipelines, evaluation frameworks.
- Architect Python services (FastAPI/Flask) to embed ML/LLM workflows end-to-end.
- Translate AI research into production features for data extraction, document reasoning, and risk analytics.
- Own the full user flow: back-end → front-end (React/TS) → CI/CD on Azure & Docker.
- Leverage AI coding tools (Copilot, Cursor, Jules) to meet our 1 dev = 4 devs productivity bar.
Core Tech Stack:
- Primary:
Python · FastAPI/Flask · Pandas · SQL/NoSQL · Hugging Face · LangChain/RAG · REST/GraphQL · Azure · Docker
- Bonus:
React.js · Vector Databases · Kubernetes
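For a rough flavour of the embeddings and retrieval work listed under "What You'll Do", here is a minimal sketch using sentence-transformers and NumPy; the model choice and documents are illustrative assumptions, not Intain's actual pipeline:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder corpus; in practice these would be structured-finance documents.
docs = [
    "Quarterly servicer report for deal ABC.",
    "Loan-level delinquency summary.",
    "Trustee payment waterfall statement.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["Which loans are delinquent?"], normalize_embeddings=True)

# With normalized vectors, the dot product equals cosine similarity.
scores = doc_vecs @ query_vec[0]
best = int(np.argmax(scores))
print(docs[best], float(scores[best]))
```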
You Bring:
- Proven track record shipping Python features and training/serving ML or LLM models.
- Comfort reading research papers/blogs, prototyping ideas, and measuring model performance.
- 360° product mindset: tests, reviews, secure code, quick iterations.
- Strong ownership and output focus — impact beats years of experience.
Why Join Intain?
- Small, expert team where your code and models hit production fast.
- Work on real-world AI problems powering billions in structured-finance transactions.
- Compensation & ESOPs tied directly to the value you ship.
📍 About Us
Intain is transforming structured finance using AI — from data ingestion to risk analytics. Our platform, powered by IntainAI and Ida, helps institutions manage and scale transactions seamlessly.

Basic Qualifications :
● Experience: 4+ years.
● Hands-on development experience with a broad mix of languages such as Java, Python, JavaScript, etc.
● Server-side development experience, mainly in Java (Python and Node.js are also acceptable).
● UI development experience in ReactJS, AngularJS, PolymerJS, EmberJS, jQuery, etc., is good to have.
● Passion for software engineering and following the best coding concepts.
● Good-to-great problem-solving and communication skills.
Nice to have Qualifications :
● Product and customer-centric mindset.
● Great OO skills, including design patterns.
● Experience with devops, continuous integration & deployment.
● Exposure to big data technologies, Machine Learning and NLP will be a plus.


Job Title: Software Engineer Consultant/Expert 34192
Location: Chennai
Work Type: Onsite
Notice Period: Immediate joiners only, or candidates serving notice of up to 30 days.
Position Description:
- Candidate with strong Python experience.
- Full-stack development on GCP, including end-to-end deployment and MLOps; the Software Engineer must be hands-on across the front end, back end, and MLOps.
- This is a Tech Anchor role.
Experience Required:
- 7 Plus Years



Key Responsibilities: 34249
- Feature Development: Design, develop, and maintain new features and enhancements across the stack.
- Front-End: Build intuitive, responsive UIs using Angular or React.
- Back-End: Develop scalable APIs and services using Python (preferred), Java/Spring, or Node.js.
- Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP) — familiarity with services like App Engine, Cloud Functions, Kubernetes is expected.
- Performance Tuning: Identify and optimize performance bottlenecks.
- Code Quality: Participate in code reviews and maintain high standards through unit testing and automation.
- DevOps & CI/CD: Collaborate on deployment pipelines using Tekton, Terraform, and other DevOps tools.
- Cross-Functional Collaboration: Work closely with Product Managers, UI/UX Designers, and fellow Engineers in an agile environment.
Must-Have Skills:
- Strong development expertise in Python (preferred), Angular, and GCP
- Understanding of DevOps practices
- Experience with SDLC, agile methodologies, and unit testing
Good to Have (Nice-to-Haves):
- Hands-on experience with:
- Tekton, Terraform, CI/CD pipelines
- Large Language Models (LLMs) integration
- AWS/Azure (in addition to GCP)
- Contributions to open-source projects
- Familiarity with API design and microservices architecture
Educational Qualification:
- Required: Bachelor’s Degree in Computer Science, Engineering, or related discipline

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway.
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
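As a hedged sketch of the serverless pattern this role describes (FastAPI behind API Gateway and Lambda), assuming the Mangum adapter; the route and model names are illustrative only:

```python
from fastapi import FastAPI
from mangum import Mangum
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item):
    # In a real service this would persist to PostgreSQL (e.g., via SQLAlchemy).
    return {"received": item.name, "price": item.price}

# Mangum wraps the ASGI app so API Gateway events invoke it as a Lambda handler.
handler = Mangum(app)
```

Locally the same app runs under Uvicorn (`uvicorn main:app`); packaged for Lambda, `main.handler` would be the entry point.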

· Design, develop, and implement AI/ML models and algorithms.
· Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
· Write clean, efficient, and well-documented code.
· Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
· Work closely with senior team members to understand project requirements and contribute to technical solutions.
· Troubleshoot and debug AI/ML models and applications.
· Stay up-to-date with the latest advancements in AI/ML.
· Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.
· Develop and deploy AI solutions on Google Cloud Platform (GCP).
· Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
· Utilize Vertex AI for model training, deployment, and management.
· Integrate and leverage Google Gemini for specific AI functionalities.
Qualifications:
· Bachelor’s degree in computer science, Artificial Intelligence, or a related field.
· 3+ years of experience in developing and implementing AI/ML models.
· Strong programming skills in Python.
· Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
· Good understanding of machine learning concepts and techniques.
· Ability to work independently and as part of a team.
· Strong problem-solving skills.
· Good communication skills.
· Experience with Google Cloud Platform (GCP) is preferred.
· Familiarity with Vertex AI is a plus.
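A minimal, illustrative sketch of the model-building flow described above, using Pandas and scikit-learn on synthetic data (the Vertex AI deployment wiring is deliberately omitted):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({"f1": rng.normal(size=500), "f2": rng.normal(size=500)})
df["label"] = (df["f1"] + df["f2"] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["f1", "f2"]], df["label"], test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```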

- Work with a team to provide end to end solutions including coding, unit testing and defect fixes.
- Work to build scalable solutions and work with quality assurance and control teams to analyze and fix issues
- Develop and maintain APIs and Services in Node.js/Python
- Develop and maintain web-based UIs using front-end frameworks
- Participate in code reviews, unit testing and integration testing
- Participate in the full software development lifecycle, from concept and design to implementation and support
- Ensure application performance, scalability, and security through best practices in coding, testing and deployment
- Collaborate with DevOps team for troubleshooting deployment issues
Qualification
● 1-5 years of experience as a Software Engineer or similar, focusing on software development and system integration
● Proficiency in Node.js, Typescript, React, Express framework
● In-depth knowledge of databases such as MongoDB
● Proficient in HTML5, CSS3, and responsive UI design
● Proficiency in any Python development framework is a plus
● Strong direct experience in functional and object-oriented programming using JavaScript
● Experience with cloud platforms (Azure preferred)
● Microservices architecture and containerization
● Expertise in performance monitoring, tuning, and optimization
● Understanding of DevOps practices for automated deployments
● Understanding of software design patterns and best practices
● Practical experience working in Agile developments (scrum)
● Excellent critical thinking skills and the ability to mentor junior team members
● Effectively communicate and collaborate with cross-functional teams
● Strong capability to work independently and deliver results within tight deadlines
● Strong problem-solving abilities and attention to detail


Proficient in Golang, Python, Java, C++, or Ruby (at least one)
Strong grasp of system design, data structures, and algorithms
Experience with RESTful APIs, relational and NoSQL databases
Proven ability to mentor developers and drive quality delivery
Track record of building high-performance, scalable systems
Excellent communication and problem-solving skills
Experience in consulting or contractor roles is a plus

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.
Responsibilities:
Ensure the quality of product feature development.
Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.
Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python.
Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.
Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.
Collaboration: Participate in project planning and execution with the team for efficient project delivery.
Requirements:
A Bachelor's degree in computer science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.
2-5 years of solid experience in software testing, with a focus on automation. Proficiency in using defect tracking systems, code repositories, and IDEs.
A good grasp of programming languages like Python/Java/Javascript. Must be able to understand and write code.
Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).
Good team player with a proactive approach to continuous learning.
Sound understanding of the Agile software development methodology.
Experience in a SaaS-based product company or a fast-paced startup environment is a plus.
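To illustrate the Python-based test automation this role centres on, a minimal pytest/Selenium sketch; the URL and element locator are hypothetical:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    # Headless Chrome keeps the test CI-friendly.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_login_page_has_submit_button(driver):
    driver.get("https://example.com/login")  # placeholder URL
    button = driver.find_element(By.CSS_SELECTOR, "button[type='submit']")
    assert button.is_displayed()
```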

Job Title: Site Reliability Engineer (SRE)
Experience: 4+ Years
Work Location: Bangalore / Chennai / Pune / Gurgaon
Work Mode: Hybrid or Onsite (based on project need)
Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.
🛠️ Key Responsibilities
- Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
- Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting.
- Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
- Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
- Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
- Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
- Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
- Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.
✅ Must-Have Skills
- 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
- Strong programming skills in Python (mandatory).
- Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
- Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
- Strong background in Linux-based systems and shell scripting.
- Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
- Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
- Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.
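A small, hedged example of the Python automation described above: probing an endpoint and publishing a custom health metric to CloudWatch with boto3 (the namespace and URL are placeholders):

```python
import boto3
import requests

cloudwatch = boto3.client("cloudwatch")

def publish_health_metric(url: str) -> None:
    """Probe an endpoint and record 1 (up) or 0 (down) as a custom metric."""
    try:
        up = 1 if requests.get(url, timeout=5).status_code == 200 else 0
    except requests.RequestException:
        up = 0
    cloudwatch.put_metric_data(
        Namespace="Custom/SiteHealth",  # placeholder namespace
        MetricData=[{"MetricName": "EndpointUp", "Value": up, "Unit": "Count"}],
    )

if __name__ == "__main__":
    publish_health_metric("https://example.com/health")  # placeholder URL
```

An alarm on EndpointUp dropping below 1 could then page the on-call rotation.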

Job Title: AI Solutioning Architect – Healthcare IT
Role Summary:
The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).
Key Responsibilities:
- Architect scalable AI solutions from data ingestion to deployment.
- Align AI initiatives with business objectives and regulatory requirements (HIPAA).
- Collaborate with cross-functional teams to deliver AI projects.
- Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
- Mentor technical teams and ensure best practices in MLOps.
- Communicate complex concepts to diverse stakeholders.
Qualifications:
- Bachelor’s/Master’s in Computer Science or related field.
- 12+ years in software development/architecture with strong AI/ML focus.
- Experience in healthcare IT and compliance (HIPAA).
- Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
- Hands-on with GCP (preferred) or other cloud platforms.
- Strong leadership, problem-solving, and communication skills.


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.

Job Summary:
We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems.
Key Responsibilities:
- Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn.
- Perform data preprocessing, feature engineering, and exploratory data analysis.
- Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI.
- Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions.
- Optimize model performance and ensure robustness in real-time environments.
- Maintain clear documentation of code, models, and processes.
Required Skills:
- Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Strong understanding of ML algorithms (classification, regression, clustering, deep learning).
- Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP).
- Familiarity with containerization (Docker, Kubernetes) and CI/CD practices.
- Solid grasp of RESTful API development and integration.
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or related field.
- 2–5 years of experience in Python development with a focus on AI/ML.
- Exposure to MLOps practices and model monitoring tools.
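As an illustration of integrating a trained model into a production API (one of the responsibilities above), a minimal FastAPI sketch; the model file is a placeholder for a previously trained estimator:

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # placeholder: trained scikit-learn estimator

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # Reshape a single observation into the (1, n_features) shape the model expects.
    X = np.array(features.values).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}
```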


Genspark is hiring Professionals for C Development for their Premium Client
Work Location- Chennai
Entry Criteria
Graduate from Any Engineering Background/BSc/MSc/MCA with specialization (Computer/Electronics/IT)
Minimum 1 year experience in Industry
Working Knowledge of C/Embedded/C++/DSA
Programming Aptitude (Any Language)
Basic understanding of programming constructs: variables, loops, conditionals, functions
Logical thinking and algorithmic approach
Computer Science Fundamentals:
Data structures basics: arrays, stacks, queues, linked lists
Operating System basics: what is a process/thread, memory, file system, etc.
Basic understanding of compilation, runtime, networking and sockets etc.
Problem Solving & Logical Reasoning
Ability to trace logic, find errors, and reason through pseudocode
Analytical and debugging capabilities
Learning Attitude & Communication
Demonstrated interest in low-level or systems programming (even if no experience)
Willingness to learn C and work close to the OS level
Clarity of thought and ability to explain what they do know
Soft Skills :
Able to explain and communicate thoughts clearly in English
Confident in solving new problems independently or with guidance
Willingness to take feedback and iterate
Evaluation Process
Candidates will be assigned an online test, followed by Technical Screening.
Shortlisted Candidates will have to appear for an F2F Interview with the Client in Chennai.


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.


Job Title : Senior Machine Learning Engineer
Experience : 8+ Years
Location : Chennai
Notice Period : Immediate Joiners Only
Work Mode : Hybrid
Job Summary :
We are seeking an experienced Machine Learning Engineer with a strong background in Python, ML algorithms, and data-driven development.
The ideal candidate should have hands-on experience with popular ML frameworks and tools, solid understanding of clustering and classification techniques, and be comfortable working in Unix-based environments with Agile teams.
Mandatory Skills :
- Programming Languages : Python
- Machine Learning : Strong experience with ML algorithms, models, and libraries such as Scikit-learn, TensorFlow, and PyTorch
- ML Concepts : Proficiency in supervised and unsupervised learning, including techniques such as K-Means, DBSCAN, and Fuzzy Clustering
- Operating Systems : RHEL or any Unix-based OS
- Databases : Oracle or any relational database
- Version Control : Git
- Development Methodologies : Agile
Desired Skills :
- Experience with issue tracking tools such as Azure DevOps or JIRA.
- Understanding of data science concepts.
- Familiarity with Big Data algorithms, models, and libraries.
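A brief sketch of the unsupervised techniques named above (K-Means and DBSCAN) on synthetic data with scikit-learn, purely for illustration:

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with three natural clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# DBSCAN needs no cluster count; eps/min_samples control density sensitivity.
dbscan_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)

print("k-means clusters:", np.unique(kmeans_labels))
print("DBSCAN clusters (-1 = noise):", np.unique(dbscan_labels))
```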

Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills
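As a rough sketch of a Glue ETL job like those described above; the database, table, and bucket names are placeholders, not a specific pipeline:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Simple transformation: keep completed orders only ("status" is a placeholder field).
completed = source.filter(lambda row: row["status"] == "completed")

# Write the result to S3 as Parquet (placeholder bucket).
glue_context.write_dynamic_frame.from_options(
    frame=completed,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/clean/orders/"},
    format="parquet",
)
job.commit()
```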

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.
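A hedged sketch of the Athena-based SQL analysis mentioned above, run from Python via boto3; the database, table, and output bucket are placeholders:

```python
import time

import boto3

athena = boto3.client("athena")

query = "SELECT region, SUM(revenue) AS total FROM sales GROUP BY region"

# Start the query; Athena writes results to the given S3 location.
execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics_db"},  # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (production code would use exponential backoff).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:3])
```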

Has substantial expertise in Linux OS, HTTPS, and proxy technologies, with hands-on Perl and Python scripting.
Is responsible for the identification and selection of appropriate network solutions to design and deploy in environments based on business objectives and requirements.
Is skilled in developing, deploying, and troubleshooting network deployments, with deep technical knowledge, especially around bootstrapping, Squid Proxy, HTTPS, and equivalent scripting. Further aligns the network to meet the Company’s objectives through continuous development, improvement, and automation.
Preferably 10+ years of experience in network design and delivery of technology-centric, customer-focused services.
Preferably 3+ years in modern software-defined network and preferably, in cloud-based environments.
Diploma or bachelor’s degree in engineering, Computer Science/Information Technology, or its equivalent.
Preferably possess a valid RHCE (Red Hat Certified Engineer) certification
Preferably possess any vendor Proxy certification (Forcepoint/ Websense/ bluecoat / equivalent)
Must possess advanced knowledge of TCP/IP concepts and fundamentals. Good understanding and working knowledge of Squid proxy, the HTTPS protocol, and certificate management.
Fundamental understanding of proxies and PAC files.
Integration experience and knowledge between modern networks and cloud service providers such as AWS, Azure and GCP will be advantageous.
Knowledge in SaaS, IaaS, PaaS, and virtualization will be advantageous.
Coding skills such as Perl, Python, Shell scripting will be advantageous.
Excellent technical knowledge, troubleshooting, problem analysis, and outside-the-box thinking.
Excellent communication skills – oral, written and presentation, across various types of target audiences.
Strong sense of personal ownership and responsibility in accomplishing the organization’s goals and objectives. Exudes confidence, is able to cope under pressure, and will roll up his/her sleeves to drive a project to success in a challenging environment.
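A small Python sketch of the proxy-validation scripting this role involves: routing an HTTPS request through a Squid proxy with the requests library (the proxy host and port are placeholders):

```python
import requests

# Placeholder Squid proxy endpoint.
proxies = {
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
}

def check_via_proxy(url: str) -> None:
    """Fetch a URL through the proxy and report status, as a connectivity check."""
    try:
        resp = requests.get(url, proxies=proxies, timeout=10)
        print(url, "->", resp.status_code)
    except requests.RequestException as exc:
        print(url, "-> FAILED:", exc)

check_via_proxy("https://example.com")
```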

Profile: AWS Data Engineer
Mode- Hybrid
Experience- 5 to 7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning
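As a simple sketch of the serverless data-processing responsibility above: a Lambda handler reacting to S3 object-created events (the event-source wiring is assumed; names are placeholders):

```python
import json

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events; logs basic metadata per object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)
        print(json.dumps({
            "bucket": bucket,
            "key": key,
            "size_bytes": head["ContentLength"],
        }))
    return {"status": "ok"}
```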

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
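To illustrate the automated data-validation responsibility described above, a minimal PySpark check; the input path, columns, and rules are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Placeholder input: a Parquet dataset on S3.
df = spark.read.parquet("s3://my-bucket/clean/orders/")

# Rule 1: the primary key must be non-null and unique.
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = df.count() - df.dropDuplicates(["order_id"]).count()

# Rule 2: amounts must be non-negative.
bad_amounts = df.filter(F.col("amount") < 0).count()

failures = {"null_ids": null_ids, "duplicate_ids": dupes, "negative_amounts": bad_amounts}
if any(v > 0 for v in failures.values()):
    raise ValueError(f"Data-quality checks failed: {failures}")
print("All data-quality checks passed.")
```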

What We’re Looking For
- 4+ years of backend development experience in scalable web applications.
- Strong expertise in Python, Django ORM, and RESTful API design.
- Familiarity with relational databases like PostgreSQL and MySQL
- Comfortable working in a startup environment with multiple priorities.
- Understanding of cloud-native architectures and SaaS models.
- Strong ownership mindset and ability to work with minimal supervision.
- Excellent communication and teamwork skills.
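A tiny sketch of the Django ORM patterns named above; the model and field names are invented for illustration only:

```python
from django.db import models

class Subscription(models.Model):
    """Hypothetical SaaS subscription record."""
    customer_email = models.EmailField()
    plan = models.CharField(max_length=50)
    active = models.BooleanField(default=True)
    created_at = models.DateTimeField(auto_now_add=True)

# Typical ORM queries a view or REST endpoint might run:
# active_on_plan = Subscription.objects.filter(active=True, plan="pro")
# recent = Subscription.objects.order_by("-created_at")[:10]
```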

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification :- B.Tech, BE, M.Tech, ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate's experience:
- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA
- Experience more than 8 yrs - Salary up to 40 LPA

Job Title : Automation Quality Engineer (Gen AI)
Experience : 3 to 5+ Years
Location : Bangalore / Chennai / Pune
Role Overview :
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities :
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate into CI/CD pipelines.
- Validate model accuracy, robustness, and monitor model drift.
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
Mandatory Skills :
- Test Automation : Selenium, Playwright, or DeepEval
- Programming/Scripting : Python, JavaScript
- API Testing : Postman, REST Assured
- Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
- Performance Testing : JMeter
- Bug Tracking : Azure DevOps
- Methodologies : Agile delivery experience
- Soft Skills : Strong communication and problem-solving abilities