50+ Python Jobs in India

Profile: Sr. DevOps Engineer
Location: Gurugram
Experience: 5+ Years
Company: Watsoo
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 5+ years of proven hands-on DevOps experience.
- Strong experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Expertise in containerization & orchestration (Docker, Kubernetes, Helm).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Experience with monitoring and logging solutions (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Proficiency in scripting languages (Python, Bash, or Shell).
- Knowledge of networking, security, and system administration.
- Strong problem-solving skills and ability to work in fast-paced environments.
- Troubleshoot production issues, perform root cause analysis, and implement preventive measures.
- Advocate for DevOps best practices, automation, and continuous improvement.

We are looking for a highly skilled Senior Full Stack Developer / Tech Lead to drive end-to-end development of scalable, secure, and high-performance applications. The ideal candidate will have strong expertise in React.js, Node.js, PostgreSQL, MongoDB, Python, AI/ML, and Google Cloud Platform (GCP). You will play a key role in architecture design, mentoring developers, ensuring best coding practices, and integrating AI/ML solutions into our products.
This role requires a balance of hands-on coding, system design, cloud deployment, and leadership.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using React.js, Node.js, PostgreSQL, and MongoDB.
- Build, consume, and optimize REST APIs and GraphQL services.
- Develop AI/ML models with Python and integrate them into production systems.
- Implement CI/CD pipelines, containerization (Docker, Kubernetes), and cloud deployments (GCP/AWS).
- Manage security, authentication (JWT, OAuth2), and performance optimization.
- Use Redis for caching, session management, and queue handling.
- Lead and mentor junior developers, conduct code reviews, and enforce coding standards.
- Collaborate with cross-functional teams (product, design, QA) for feature delivery.
- Monitor and optimize system performance, scalability, and cost-efficiency.
- Own technical decisions and contribute to long-term architecture strategy.

We are seeking a talented Full Stack Developer to design, build, and maintain scalable web and mobile applications. The ideal candidate should have hands-on experience in frontend (React.js, Flutter), backend (Node.js, Express), databases (PostgreSQL, MongoDB), and Python for AI/ML integration. You will work closely with the engineering team to deliver secure, high-performance, and user-friendly products.
Key Responsibilities
- Develop responsive and dynamic web applications using React.js and modern UI frameworks.
- Build and optimize REST APIs and backend services with Node.js and Express.js.
- Design and manage PostgreSQL and MongoDB databases, ensuring optimized queries and data modeling.
- Implement state management using Redux/Context API.
- Ensure API security with JWT, OAuth2, Helmet.js, and rate-limiting.
- Integrate Google Cloud services (GCP) for hosting, storage, and serverless functions.
- Deploy and maintain applications using CI/CD pipelines, Docker, and Kubernetes.
- Use Redis for caching, sessions, and job queues.
- Optimize frontend performance (lazy loading, code splitting, caching strategies).
- Collaborate with design, QA, and product teams to deliver high-quality features.
- Maintain clear documentation and follow coding standards.
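The JWT-based API security mentioned above comes down to signing a claims payload on login and verifying it on every request. A minimal sketch using the PyJWT library; the secret, claim names, and expiry here are illustrative, not from the posting:

```python
import datetime

import jwt  # PyJWT

SECRET = "change-me"  # illustrative; load from a secrets manager in production

def issue_token(user_id: str, ttl_minutes: int = 15) -> str:
    """Sign a short-lived access token for the given user."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": user_id,
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Return the claims; raises jwt.InvalidTokenError on expiry or tampering."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```

In an Express/Node stack the shape is the same (sign on login, verify in middleware); rate-limiting and Helmet.js then sit in front of the verified routes.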



Job Title: AI / Machine Learning Engineer
Company: Apprication Pvt Ltd
Location: Goregaon East
Employment Type: Full-time
Experience: 4 Years
About the Role
We’re seeking a highly motivated AI / Machine Learning Engineer to join our growing engineering team. You will design, build, and deploy AI-powered solutions for web and application platforms, bringing cutting-edge machine learning research into real-world production systems.
This role blends applied machine learning, backend engineering, and cloud deployment, with opportunities to work on NLP, computer vision, generative AI, and intelligent automation across diverse industries.
Key Responsibilities
- Design, train, and deploy machine learning models for NLP, computer vision, recommendation systems, and other AI-driven use cases.
- Integrate ML models into production-ready web and mobile applications, ensuring scalability and reliability.
- Collaborate with data scientists to optimize algorithms, pipelines, and inference performance.
- Build APIs and microservices for model serving, monitoring, and scaling.
- Leverage cloud platforms (AWS, Azure, GCP) for ML workflows, containerization (Docker/Kubernetes), and CI/CD pipelines.
- Implement AI-powered features such as chatbots, personalization engines, predictive analytics, or automation systems.
- Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
- Ensure solutions meet security, compliance, and performance standards.
- Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.
Skills & Qualifications
- Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
- 4+ years of proven experience as an AI/ML Engineer, Data Scientist, or AI Application Developer.
- Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
- Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
- Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
- Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
- Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
- Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
- Strong understanding of data structures, algorithms, APIs, and distributed systems.
- Excellent problem-solving, analytical, and communication skills.

Wissen Technology is hiring for Data Engineer
About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time.
Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.
Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.
Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.
Experience: 4-7 years
Notice Period: Immediate to 15 days
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python and Pandas.
- Implement and manage workflows using Airflow.
- Utilize Azure Cloud Services for data storage and processing.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Optimize and scale data infrastructure to meet business needs.
Qualifications and Required Skills:
- Proficiency in Python (Must Have).
- Strong experience with Pandas (Must Have).
- Expertise in Airflow (Must Have).
- Experience with Azure Cloud Services.
- Good communication skills.
Good to Have Skills:
- Experience with Pyspark.
- Knowledge of Kubernetes.
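A typical Pandas pipeline step of the kind described in the responsibilities above deduplicates, coerces types, and drops unusable rows before load. A minimal sketch; the table and column names are illustrative:

```python
import pandas as pd

def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """One transform step: dedupe by key, coerce types, drop incomplete rows."""
    df = raw.drop_duplicates(subset=["order_id"], keep="last")
    df = df.assign(
        amount=pd.to_numeric(df["amount"], errors="coerce"),
        order_date=pd.to_datetime(df["order_date"], errors="coerce"),
    )
    return df.dropna(subset=["amount", "order_date"]).reset_index(drop=True)
```

In Airflow, a function like this would be the callable of a PythonOperator task, with extraction and load as neighboring tasks in the DAG.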
Wissen Sites:
- Website: http://www.wissen.com
- LinkedIn: https://www.linkedin.com/company/wissen-technology
- Wissen Leadership: https://www.wissen.com/company/leadership-team/
- Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
- Wissen Thought Leadership: https://www.wissen.com/articles/

About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT, Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
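The pytest requirement above usually means exercising small, pure service functions directly with parametrized cases. A hedged sketch; the helper function and test data are illustrative, not from the product:

```python
def normalize_msisdn(raw: str, default_cc: str = "91") -> str:
    """Normalize an Indian phone number to E.164-style form (illustrative helper)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("0"):          # strip trunk prefix
        digits = digits[1:]
    if len(digits) == 10:               # bare national number: prepend country code
        digits = default_cc + digits
    return "+" + digits

# pytest collects functions named test_*; run with `pytest -q`
import pytest

@pytest.mark.parametrize("raw,expected", [
    ("098765 43210", "+919876543210"),
    ("+91 98765 43210", "+919876543210"),
    ("9876543210", "+919876543210"),
])
def test_normalize_msisdn(raw, expected):
    assert normalize_msisdn(raw) == expected
```

Keeping the logic in pure functions like this is what makes the "clean, tested, maintainable" bullet above practical: the test needs no database, queue, or AWS mock.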
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one

SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Terraform with 15+ years of practice; CI/CD (GitHub Actions, Jenkins) with 10+ years; config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
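The SQL window-function skill listed above shows up constantly in SCD-style work: picking the latest record per key. A minimal sketch using stdlib sqlite3 (which supports window functions from SQLite 3.25); the table and columns are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_updates (id INTEGER, name TEXT, updated_at TEXT);
INSERT INTO customer_updates VALUES
  (1, 'Asha',   '2024-01-01'),
  (1, 'Asha K', '2024-03-05'),
  (2, 'Ravi',   '2024-02-10');
""")

# ROW_NUMBER() per key, newest first; rn = 1 is the current record
rows = conn.execute("""
SELECT id, name FROM (
    SELECT id, name,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
    FROM customer_updates
) WHERE rn = 1
ORDER BY id
""").fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` pattern carries over unchanged to Snowflake, BigQuery, and Spark SQL.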
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.


Simstar.co is a billion-dollar diamond enterprise, a joint venture between Sim Gems Group and Star Gems Group.
We combine decades of expertise in jewelry manufacturing and trading with modern technology and data-driven operations.
Our 25-member Tech, Product, and ERP team is the engine behind digital innovation at Simstar. We’re now looking to expand this team with a Senior Software Engineer (SSE) who can help us scale faster.
Role Overview
As a Senior Software Engineer, you’ll be responsible for building and maintaining high-performance web applications across the stack. You’ll collaborate with product managers, designers, and business stakeholders to translate complex business needs into reliable digital systems.
Key Responsibilities
- Design, build, and maintain scalable web applications end-to-end.
- Work closely with product and design teams to deliver user-centric, high-performance interfaces.
- Develop and optimize backend APIs, database queries, and integrations.
- Write clean, maintainable, and testable code following best practices.
- Mentor junior developers and contribute to team-wide tech decisions.
Requirements
- Experience: 5+ years of hands-on full-stack development experience.
- Backend: Proficiency in Python (Java, RoR, or Node.js experience is fine if you can ramp up on Python quickly).
- Frontend: Experience with React, Angular, or Vue.js.
- Database: Strong knowledge of SQL databases (MySQL, PostgreSQL, or Oracle).
- Communication: Comfortable in English or Hindi.
- Location: Bangalore, 5 days a week (Work from Office).
- Availability: Immediate joiners preferred.
Why Join Us
- Be part of a fast-growing global diamond brand backed by two industry leaders.
- Collaborate with a sharp, experienced tech and product team solving real-world business challenges.
- Work at the intersection of luxury, data, and innovation — building systems that directly impact global operations.

Experience: 3–7 Years
Locations: Pune / Bangalore / Mumbai
Notice Period: Immediate joiners only
Employment Type: Full-time
🛠️ Key Skills (Mandatory):
- Python: Strong coding skills for data manipulation and automation.
- PySpark: Experience with distributed data processing using Spark.
- SQL: Proficient in writing complex queries for data extraction and transformation.
- Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow.
Interested candidates, please share your resume with the details below.
Total Experience -
Relevant Experience in Python, PySpark, SQL, Azure Databricks -
Current CTC -
Expected CTC -
Notice period -
Current Location -
Desired Location -


We are seeking a Senior Artificial Intelligence Engineer to join our team, contributing expertise in AI technologies across dynamic projects. This full-time, freelance, or remote position is available in Pune Division, Delhi, Noida, Gurgaon, Mumbai, Bengaluru, and Kolkata. Candidates should bring up to 10 years of professional experience and a strong foundation in developing and deploying advanced AI solutions in a fast-paced IT environment.
Qualifications and Skills
- Machine Learning (Mandatory skill): Hands-on experience designing, building, and optimizing machine learning models for real-world applications is essential for this role.
- Python (Mandatory skill): Exceptional proficiency in Python programming, with ability to write efficient, scalable, and maintainable code for AI projects.
- Artificial Intelligence (AI) (Mandatory skill): Proven capability to implement, train, and deploy artificial intelligence systems across diverse domains and business scenarios.
- PyTorch: Strong knowledge of PyTorch for creating, training, and fine-tuning deep learning models for various industrial use cases.
- MLOps: Familiarity with MLOps practices to automate, monitor, and manage machine learning workflows, deployments, and model lifecycle effectively.
- SQL: Adept at using SQL for extracting, analyzing, and managing large volumes of structured data within AI-related projects.
- Excellent problem-solving skills and analytical thinking to identify, develop, and implement innovative AI-powered solutions to complex business problems.
- Solid understanding of software development best practices, including version control, code documentation, and collaborative teamwork in cross-functional settings.
Roles and Responsibilities
- Design, implement, and deploy scalable machine learning and artificial intelligence models and algorithms for a wide array of use cases.
- Collaborate with product managers, data engineers, and other key stakeholders to define AI project requirements and deliver innovative outcomes.
- Analyze large and complex datasets to extract valuable insights, train models, and continuously improve model performance and accuracy.
- Adopt and implement best practices in MLOps for continuous integration, deployment, monitoring, and maintenance of AI solutions.
- Work with PyTorch and related frameworks to build, experiment, and optimize deep learning models tailored to specific business challenges.
- Communicate findings, progress, risks, and results succinctly to technical and non-technical stakeholders, guiding strategic decision-making.
- Actively research emerging trends and advancements in AI, recommending and integrating relevant tools and methodologies into ongoing projects.
- Provide technical mentorship and guidance to junior engineers, fostering an environment focused on continuous learning and growth.

Experience: 7+ Years
Must-Have:
- Python (Pandas, PySpark)
- Data engineering & workflow optimization
- Delta Tables, Parquet
Good-to-Have:
- Databricks
- Apache Spark, DBT, Airflow
- Advanced Pandas optimizations
- PyTest/DBT testing frameworks
Interested candidates can reply with the details below.
Total Experience -
Relevant Experience in Python, Pandas, Data Engineering, Workflow Optimization, Delta Tables -
Current CTC -
Expected CTC -
Notice Period / LWD -
Current location -
Desired location -

Job Title : Perl Developer
Experience : 6+ Years
Engagement Type : C2C (Contract)
Location : Remote
Shift Timing : General Shift
Job Summary :
We are seeking an experienced Perl Developer with strong scripting and database expertise to support an application modernization initiative.
The role involves code conversion for compatibility between Sybase and MS SQL, ensuring performance, reliability, and maintainability of mission-critical systems.
You will work closely with the engineering team to enhance, migrate, and optimize codebases written primarily in Perl, with partial transitions toward Python for long-term sustainability.
Mandatory Skills :
Perl, Python, T-SQL, SQL Server, ADO, Git, Release Management, Monitoring Tools, Automation Tools, CI/CD, Sybase-to-MSSQL Code Conversion
Key Responsibilities :
- Analyze and convert existing application code from Sybase to MS SQL for compatibility and optimization.
- Maintain and enhance existing Perl scripts and applications.
- Where feasible, refactor or rewrite legacy components into Python for improved scalability.
- Collaborate with development and release teams to ensure seamless integration and deployment.
- Follow established Git/ADO version control and release management practices.
- (Optional) Contribute to monitoring, alerting, and automation improvements.
Required Skills :
- Strong Perl development experience (primary requirement).
- Proficiency in Python for code conversion and sustainability initiatives.
- Hands-on experience with T-SQL / SQL Server for database interaction and optimization.
- Familiarity with ADO/Git and standard release management workflows.
Nice to Have :
- Experience with monitoring and alerting tools.
- Familiarity with automation tools and CI/CD pipelines.

🚀 We’re Hiring: Python Developer – Quant Strategies & Backtesting | Mumbai (Goregaon East)
Are you a skilled Python Developer passionate about financial markets and quantitative trading?
We’re looking for someone to join our growing Quant Research & Algo Trading team, where you’ll work on:
🔹 Developing & optimizing trading strategies in Python
🔹 Building backtesting frameworks across multiple asset classes
🔹 Processing and analyzing large market datasets
🔹 Collaborating with quant researchers & traders on real-world strategies
What we’re looking for:
✔️ 3+ years of experience in Python development (preferably in fintech/trading/quant domains)
✔️ Strong knowledge of Pandas, NumPy, SciPy, SQL
✔️ Experience in backtesting, data handling & performance optimization
✔️ Familiarity with financial markets is a big plus
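Backtesting work of the kind described above is usually vectorized with Pandas/NumPy rather than looped bar-by-bar. A toy moving-average-crossover sketch; the windows and signal rule are illustrative, not a real strategy:

```python
import numpy as np
import pandas as pd

def backtest_ma_crossover(prices: pd.Series, fast: int = 3, slow: int = 5) -> pd.Series:
    """Equity curve (starting at 1.0): long when the fast MA is above the slow MA."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    # shift(1): trade on the NEXT bar after the signal, avoiding look-ahead bias
    position = (fast_ma > slow_ma).astype(float).shift(1).fillna(0.0)
    returns = prices.pct_change().fillna(0.0)
    return (1.0 + position * returns).cumprod()
```

The `shift(1)` is the part that matters most in practice: acting on a signal in the same bar it was computed is the classic backtesting bug that inflates results.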
📍 Location: Goregaon East, Mumbai
💼 Competitive package + exposure to cutting-edge quant strategies


At krid.ai, we are at the epicenter of the AI shift, building the future of AI from the ground up. Our journey is led by visionary co-founders with a combined 45 years of experience, including a half-billion-dollar (₹4,000 Cr.) Silicon Valley exit, multiple US patents, and experience scaling a US-based multinational’s global business by many multiples in 10 years.
We are not just creating a company; we are shaping the future, and we invite you to be a part of this incredible leap. This is more than just another internship; it's your invitation to join the agentic AI revolution. The internship offers global exposure, working with partners across our core markets in the US, Europe, and India.
We are currently hiring for Agentic AI Intern at krid.ai for a duration of 6 months: https://forms.gle/FEmJombeHxEKo8Rn7
The application link above transparently communicates the stipend, work hours, working days, and other details.
PPOs will be offered to interns based on performance and company requirements.

Wissen Technology is hiring for Data Engineer
Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables and Parquet and be proficient in Pandas and PySpark.
Experience: 7+ years
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python (Pandas, PySpark).
- Optimize data workflows and ensure efficient data processing.
- Work with Delta Tables and Parquet for data storage and management.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement best practices for data engineering and workflow optimization.
Qualifications and Required Skills:
- Proficiency in Python, specifically with Pandas and PySpark.
- Strong experience in data engineering and workflow optimization.
- Knowledge of Delta Tables and Parquet.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication skills.
Good to Have Skills:
- Experience with Databricks.
- Knowledge of Apache Spark, DBT, and Airflow.
- Advanced Pandas optimizations.
- Familiarity with PyTest/DBT testing frameworks.
Wissen | Driving Digital Transformation
A technology consultancy that drives digital innovation by connecting strategy and execution, helping global clients to strengthen their core technology.

Key Responsibilities:
- Design, build and maintain scalable ETL/ELT pipelines using Azure Data Factory, Azure Databricks and Spark.
- Develop and optimize data workflows using SQL and Python/Scala for large-scale processing.
- Implement performance tuning and optimization strategies for pipelines and Spark jobs.
- Support feature engineering and model deployment workflows with data engineering teams.
- Ensure data quality, validation, error-handling and monitoring are in place.
- Work with Delta Lake, Parquet and Big Data storage (ADLS / Blob).
Required Skills:
- Azure Data Platform: Data Factory, Databricks, ADLS / Blob Storage.
- Strong SQL and Python or Scala.
- Big Data technologies: Spark, Delta Lake, Parquet.
- ETL/ELT pipeline design and data transformation expertise.
- Data pipeline optimization, performance tuning and CI/CD for data workloads.
Nice-to-Have:
- Familiarity with data governance, security and compliance in hybrid environments.
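Data-quality checks like those above are often codified as explicit rule functions run after each pipeline stage. A minimal Pandas sketch; the rules and column names are illustrative:

```python
import pandas as pd

def validate(df: pd.DataFrame, key: str, required: list[str]) -> list[str]:
    """Return a list of human-readable data-quality violations (empty = clean)."""
    problems = []
    if df[key].duplicated().any():
        problems.append(f"duplicate values in key column '{key}'")
    for col in required:
        nulls = int(df[col].isna().sum())
        if nulls:
            problems.append(f"{nulls} null(s) in required column '{col}'")
    return problems
```

A non-empty result would typically fail the pipeline task (or route offending rows to a quarantine table), which is what turns "ensure data quality" from a guideline into an enforced gate.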

Role: Perl Developer
Location: Remote
Experience: 6–8 years
Shift: General
Job Description
Primary Skills (Must Have):
- Strong Perl development skills.
- Good knowledge of Python and T-SQL / SQL Server to create compatible code.
- Hands-on experience with ADO, Git, and release management practices.
Secondary Skills (Good to Have):
- Familiarity with monitoring/alerting tools.
- Exposure to automation tools.
Day-to-Day Responsibilities
- Perform application code conversion for compatibility between Sybase and MS SQL.
- Work on existing Perl-based codebase, ensuring maintainability and compatibility.
- Convert code into Python where feasible (as part of the migration strategy).
- Where Python conversion is not feasible, create compatible code in Perl.
- Collaborate with the team on release management and version control (Git).

We are seeking a Software Engineer in Test to join our Quality Engineering team. In this role, you will be responsible for designing, developing, and maintaining automation frameworks to enhance our test coverage and ensure the delivery of high-quality software. You will collaborate closely with developers, product managers, and other stakeholders to drive test automation strategies and improve software reliability.
Key Responsibilities
● Design, develop, and maintain robust test automation frameworks for web, API, and backend services.
● Implement automated test cases to improve software quality and test coverage.
● Develop and execute performance and load tests to ensure the application behaves reliably in self-hosted environments.
● Integrate automated tests into CI/CD pipelines to enable continuous testing.
● Collaborate with software engineers to define test strategies, acceptance criteria, and quality standards.
● Conduct performance, security, and regression testing to ensure application stability.
● Investigate test failures, debug issues, and work with development teams to resolve defects.
● Advocate for best practices in test automation, code quality, and software reliability.
● Stay updated with industry trends and emerging technologies in software testing.
Qualifications & Experience
● Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
● 3+ years of experience in software test automation.
● Proficiency in programming languages such as Java, Python, or JavaScript.
● Hands-on experience with test automation tools like Selenium, Cypress, Playwright, or similar.
● Strong knowledge of API testing using tools such as Postman, RestAssured, or Karate.
● Experience with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI/CD.
● Understanding of containerization and cloud technologies (Docker, Kubernetes, AWS, or similar).
● Familiarity with performance testing tools like JMeter or Gatling is a plus.
● Excellent problem-solving skills and attention to detail.
● Strong communication and collaboration skills.
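The automated test cases described above can be sketched as a small data-driven suite. This is a minimal illustration only, assuming a hypothetical `get_user` endpoint and stubbing the HTTP call; a real framework would use pytest parametrization with an HTTP client (requests, RestAssured, etc.) against live services.

```python
def get_user(user_id):
    """Stub standing in for an HTTP call to GET /users/{id} (assumption)."""
    fake_db = {1: {"id": 1, "name": "asha", "active": True}}
    if user_id in fake_db:
        return 200, fake_db[user_id]
    return 404, {"error": "not found"}

# Each case pairs an input with the expected status and a body check,
# the same arrange-act-assert shape a pytest.mark.parametrize suite uses.
CASES = [
    (1, 200, lambda body: body["name"] == "asha"),
    (999, 404, lambda body: "error" in body),
]

def run_cases():
    results = []
    for user_id, want_status, body_check in CASES:
        status, body = get_user(user_id)
        results.append(status == want_status and body_check(body))
    return results
```

Keeping the cases as data makes it cheap to grow coverage without touching the runner, which is the core idea behind most data-driven automation frameworks.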



Lead Technical Consultant
Experience: 9-15 Years
This is a backend-heavy polyglot developer role: 80% backend, 20% frontend.
Backend
- 1st primary language: Java, Python, Go, Ruby on Rails (RoR), or Rust
- 2nd primary language: one of the above, or Node
The candidate should be experienced in at least 2 backend tech stacks.
Frontend
- React or Angular
- HTML, CSS
The interview process is rigorous and requires depth of experience. The candidate should be hands-on in both backend and frontend development (80/20).
The candidate should have experience with unit testing, CI/CD, DevOps, etc.
Good communication skills are a must-have.



Senior Technical Consultant (Polyglot)
Experience- 5-9 Years
This is a backend-heavy polyglot developer role: 80% backend, 20% frontend.
Backend
- 1st primary language: Java, Python, Go, Ruby on Rails (RoR), or Rust
- 2nd primary language: one of the above, or Node
The candidate should be experienced in at least 2 backend tech stacks.
Frontend
- React or Angular
- HTML, CSS
The interview process is rigorous and requires depth of experience. The candidate should be hands-on in both backend and frontend development (80/20).
The candidate should have experience with unit testing, CI/CD, DevOps, etc.
Good communication skills are a must-have.


We're seeking an AI/ML Engineer to join our team.
As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms, data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
- Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today (e.g., engineering APIs around OpenAI)
- AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
- Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
- Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
- Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
- Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
- Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
- Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
- Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
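The model-evaluation responsibility above can be made concrete with a small sketch: computing precision and recall from predictions in plain Python. In practice a project like this would reach for scikit-learn's `precision_score`/`recall_score`; this hand-rolled version just shows what those metrics measure.

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for one positive class, from paired labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Precision penalizes false alarms, recall penalizes misses; which one to optimize depends on the business problem (e.g., anomaly detection usually favors recall).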
Requirements
- Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
- Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
- Proficiency in programming languages commonly used for AI/ML. Preferably Python
- Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
- Strong understanding of machine learning algorithms, statistics, and data structures
- Experience with data preprocessing, data wrangling, and feature engineering
- Knowledge of deep learning architectures, neural networks, and transfer learning
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
- Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
- Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
- Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders

Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
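The reconciliation work mentioned in the responsibilities can be sketched as matching internal ledger entries against gateway settlement records by transaction id and amount. The field names (`txn_id`, `amount`) are illustrative assumptions, not from any specific gateway; a production version would also handle currency, partial captures, and fees.

```python
def reconcile(ledger, gateway):
    """Classify ledger entries as matched, amount-mismatched, or missing
    from the gateway settlement file."""
    gw = {txn["txn_id"]: txn for txn in gateway}
    matched, mismatched, missing = [], [], []
    for entry in ledger:
        other = gw.get(entry["txn_id"])
        if other is None:
            missing.append(entry["txn_id"])
        elif other["amount"] != entry["amount"]:
            mismatched.append(entry["txn_id"])
        else:
            matched.append(entry["txn_id"])
    return {"matched": matched, "mismatched": mismatched, "missing": missing}
```

Indexing the gateway file by id first keeps the pass over the ledger O(n), which matters once settlement files reach millions of rows.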
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)

Data Engineer
Experience: 4–6 years
Key Responsibilities
- Design, build, and maintain scalable data pipelines and workflows.
- Manage and optimize cloud-native data platforms on Azure with Databricks and Apache Spark (1–2 years).
- Implement CI/CD workflows and monitor data pipelines for performance, reliability, and accuracy.
- Work with relational databases (Sybase, DB2, Snowflake, PostgreSQL, SQL Server) and ensure efficient SQL query performance.
- Apply data warehousing concepts including dimensional modelling, star schema, data vault modelling, Kimball and Inmon methodologies, and data lake design.
- Develop and maintain ETL/ELT pipelines using open-source frameworks such as Apache Spark and Apache Airflow.
- Integrate and process data streams from message queues and streaming platforms (Kafka, RabbitMQ).
- Collaborate with cross-functional teams in a geographically distributed setup.
- Leverage Jupyter notebooks for data exploration, analysis, and visualization.
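The pipeline work above often reduces to an incremental-load pattern: pull only rows newer than the last high-watermark, then advance it. The sketch below is a toy version under that assumption, with `rows` standing in for a source table; a real pipeline would issue a `WHERE updated_at > :watermark` query through Spark or an Airflow operator.

```python
def incremental_extract(rows, watermark):
    """Return rows newer than `watermark` plus the advanced watermark.

    Re-running with the returned watermark yields no duplicates, which is
    the property that makes scheduled incremental loads safe to retry.
    """
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark
```

The `default=watermark` keeps the watermark stable on empty runs, so a quiet source table does not reset the pipeline's position.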
Required Skills
- 4+ years of experience in data engineering or a related field.
- Strong programming skills in Python with experience in Pandas, NumPy, Flask.
- Hands-on experience with pipeline monitoring and CI/CD workflows.
- Proficiency in SQL and relational databases.
- Familiarity with Git for version control.
- Strong communication and collaboration skills with ability to work independently.

AI Engineer
Location: Bangalore
Experience: 1–3 years
Employment Type: Full-time
ABOUT AIRA AI
At AiRA, we are building the world’s most advanced personal AI, designed to blend seamlessly into daily life. Our systems remember context, anticipate what matters next, and respond with personality to make every interaction feel effortless and human. We are driven by memory, proactivity, and trust, creating an AI that feels less like a tool and more like a collaborator that is always learning, guiding, and evolving alongside you.
ROLE SUMMARY
We are looking for a highly motivated AI Engineer to join our core team. From day one, your work will help shape the intelligence of our product, working directly on advanced AI infrastructure with experienced engineers.
KEY RESPONSIBILITIES
- Design and build core AI systems that combine LLMs, retrieval, and memory to power conversational intelligence
- Prototype rapidly with new models and techniques, evaluate their performance, and productionize the best approaches
- Engineer scalable and reliable backend components in Python (FastAPI, asyncio) to support real-time user interactions
- Architect Retrieval Augmented Generation (RAG) pipelines with vector and graph databases to enable long-term context and personalization
- Work closely with product and frontend teams to integrate AI features seamlessly into the user experience
- Stay current with AI research and apply new ideas to continually improve our systems
WHAT WE EXPECT
- Strong Python skills with the ability to ship clean, reliable, and scalable production code
- Experience designing real-time backend systems (FastAPI, asyncio) for high-performance applications
- Hands-on expertise with LLMs, from integrating APIs (OpenAI, Gemini, etc.) to evaluating and optimizing outputs for real use cases
- Ability to design effective prompts and build Retrieval Augmented Generation (RAG) pipelines using vector and graph databases
- Familiarity with intelligent agent design and multi-agent orchestration
- Solid foundation in data structures, algorithms, and system design, with the pragmatism to apply them in production
- Clear communication skills, attention to detail, and a strong sense of ownership in a fast-moving startup environment
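The RAG pipelines mentioned above hinge on a retrieval step: rank stored chunks by similarity to the query embedding. This sketch uses toy vectors and cosine similarity purely for illustration; a real system would call an embedding model and a vector database rather than scanning a list.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, store, top_k=2):
    """store: list of (chunk_text, embedding) pairs; returns top_k chunks."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]
```

The retrieved chunks would then be stuffed into the LLM prompt as context, which is what lets the model answer from long-term memory it was never trained on.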
WHY JOIN US
Impact: Your work directly shapes our AI’s intelligence
Learning: Exposure to advanced AI stacks and multi-agent systems
Culture: High standards, precision, and engineering excellence
Growth: Accelerate your career with equity and rapid responsibilities

Position Overview
We are seeking an experienced Solutions Architect to lead the technical design and implementation strategy for our finance automation platform. This role sits at the intersection of business requirements, technical architecture, and implementation excellence. You will be responsible for translating complex Statement of Work (SOW) requirements into comprehensive technical designs while mentoring implementation engineers and driving platform evolution.
Key Responsibilities
Solution Design & Architecture
1. Translate SOW requirements into detailed C4 architecture models and Business Process Canvas documentation
2. Design end-to-end solutions for complex finance automation workflows including reconciliations, book closure, and financial reporting
3. Create comprehensive technical specifications for custom development initiatives
4. Establish architectural standards and best practices for finance domain solutions
Technical Leadership & Mentorship
1. Mentor Implementation Engineers on solution design, technical approaches, and best practices
2. Lead technical reviews and ensure solution quality across all implementations
3. Provide guidance on complex technical challenges and architectural decisions
4. Foster knowledge sharing and technical excellence within the solutions team
Platform Strategy & Development
1. Make strategic decisions on when to push feature development to the Platform Team vs. custom implementation
2. Interface with Implementation Support team to assess platform gaps and enhancement opportunities
3. Collaborate with Program Managers to track and prioritize new platform feature development
4. Contribute to product roadmap discussions based on client requirements and market trends
Client Engagement & Delivery
1. Lead technical discussions with enterprise clients during pre-sales and implementation phases
2. Design scalable solutions that align with client's existing technology stack and future roadmap
3. Ensure solutions comply with financial regulations (Ind AS/IFRS/GAAP) and industry standards
4. Drive technical aspects of complex implementations from design through go-live
Required Qualifications
Technical Expertise
● 8+ years of experience in solution architecture, preferably in fintech or enterprise software
● Strong expertise in system integration, API design, and microservices architecture
● Proficiency in C4 modeling and architectural documentation standards
● Experience with Business Process Management (BPM) and workflow design
● Advanced knowledge of data architecture, ETL pipelines, and real-time data processing
● Strong programming skills in Python, Java, or similar languages
● Experience with cloud platforms (AWS, Azure, GCP) and containerization technologies.
Financial Domain Knowledge
● Deep understanding of finance and accounting principles (Ind AS/IFRS/GAAP)
● Experience with financial systems integration (ERP, GL, AP/AR systems)
● Knowledge of financial reconciliation processes and automation strategies
● Understanding of regulatory compliance requirements in financial services
Leadership & Communication
● Proven experience mentoring technical teams and driving technical excellence
● Strong stakeholder management skills with ability to communicate with C-level executives
● Experience working in agile environments with cross-functional teams
● Excellent technical documentation and presentation skills
Preferred Qualifications
● Master's degree in Computer Science, Engineering, or related technical field
● Experience with finance automation platforms (Blackline, Trintech, Anaplan, etc.)
● Certification in enterprise architecture frameworks (TOGAF, Zachman)
● Experience with data visualization tools (Power BI, Tableau, Looker)
● Background in SaaS platform development and multi-tenant architectures
● Experience with DevOps practices and CI/CD pipeline design
● Knowledge of machine learning applications in finance automation.
Skills & Competencies
Technical Skills
● Solution architecture and system design
● C4 modeling and architectural documentation
● API design and integration patterns
● Cloud-native architecture and microservices
● Data architecture and pipeline design
● Programming and scripting languages
Financial & Business Skills
● Financial process automation
● Business process modeling and optimization
● Regulatory compliance and risk management
● Enterprise software implementation
● Change management and digital transformation
Leadership Skills
● Technical mentorship and team development
● Strategic thinking and decision making
● Cross-functional collaboration
● Client relationship management
● Project and program management
Soft Skills
● Critical thinking and problem-solving
● Cross-functional collaboration
● Task and project management
● Stakeholder management
● Team leadership
● Technical documentation
● Communication with technical and non-technical stakeholders
Mandatory Criteria:
● Looking for candidates who are solution architects in finance from product companies.
● The candidate should have worked in fintech for at least 4–5 years.
● Strong technical and architecture skills with finance exposure.
● Candidates should be from product companies.
● 8+ years of experience in solution architecture, preferably in fintech or enterprise software.
● Proficiency in Python, Java (or similar languages), and hands-on experience with cloud platforms (AWS/Azure/GCP) and containerization (Docker/Kubernetes).
● Deep knowledge of finance and accounting principles (Ind AS/IFRS/GAAP) and financial system integrations (ERP, GL, AP/AR).
● Expertise in system integration, API design, microservices, and C4 modeling.
● Experience in financial reconciliations, automation strategies, and regulatory compliance.
● Strong problem-solving, cross-functional collaboration, project management, documentation, and communication skills.
● Proven experience mentoring technical teams and driving excellence.

Strong Software Engineering Profile
Mandatory (Experience 1): Must have 5+ years of experience using Python to design software solutions.
Mandatory (Skills 1): Strong working experience with Python (with Django framework experience) and Microservices architecture is a must.
Mandatory (Skills 2): Must have experience with event-driven architectures using Kafka
Mandatory (Skills 3): Must have Experience in DevOps practices and container orchestration using Kubernetes, along with cloud platforms like AWS, GCP, or Azure
Mandatory (Company): Product companies; experience working in fintech or banking is a plus.
Mandatory (Education): From IIT (B.Tech, dual-degree B.Tech+M.Tech, or Integrated M.Sc.), or from other premium institutes such as NIT, MNNIT, VITS, or BITS (B.E/B.Tech).


Description :
Required Qualifications:
- Bachelor’s or Master’s degree in Electronics Engineering, Computer Engineering, Computer Science, or a related field.
- 5+ years of experience in developing test frameworks and automation for Android-based infotainment systems.
- Minimum 2+ years of C++ development experience with exposure to Android’s HAL.
- Proficiency in programming languages such as Java, C++, and Python.
- 1+ years of experience in Android application development.
- Strong understanding of Android Automotive System and Android Framework.
- In-depth knowledge of different Android components, including Services, Activities, Broadcast Receivers, and Content Providers.
- Strong understanding of embedded systems architecture and RTOS concepts.
- Familiarity with hardware interfaces such as I2C, SPI, UART, CAN, etc.
- Experience with version control systems (e.g., Git), CI/CD tools (e.g., Jenkins, GitLab CI), and project management tools (e.g., JIRA).
- Excellent problem-solving and debugging skills.
- Mentoring skills to support and guide less experienced team members.
- Excellent communication skills for effective interaction with stakeholders and clients.
Mandatory Criteria
• Looking for candidates who are based in Bangalore and can join us immediately.
• Need candidate from Automotive and Manufacturing industries only.
• Candidate must have Minimum 2+ years of C++ development experience with exposure to Android’s HAL (Android framework).
• Candidate must have experience in Java with Android App development expertise.
• Candidate should have experience in JIRA - Version control systems.
• Candidate should have 1+ years of experience in Android application development.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, Opensearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.
Key Responsibilities
- Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch.
- Create and optimize dashboards to provide actionable insights into system and application performance.
- Collaborate with development and operations teams to ensure high availability and reliability of services.
- Proactively identify performance bottlenecks and drive improvements.
- Continuously explore and adopt new monitoring/observability tools and best practices.
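The alerting responsibility above usually boils down to rules of the form "alert when the error rate over a window exceeds a threshold". In Prometheus this would be a PromQL expression evaluated by the rule engine; the sketch below only illustrates the arithmetic, and the 5% threshold is an assumed value, not a standard.

```python
def error_rate(total_requests, error_requests):
    """Fraction of requests that errored in the window; 0.0 for empty windows."""
    return error_requests / total_requests if total_requests else 0.0

def should_alert(total_requests, error_requests, threshold=0.05):
    """True when the window's error rate exceeds the (assumed) threshold."""
    return error_rate(total_requests, error_requests) > threshold
```

Guarding the empty-window case matters in real alerting too: a service with zero traffic should page for absence of traffic, not divide by zero.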
Required Skills & Qualifications
- Minimum 2 years of experience in SRE, DevOps, or related roles.
- Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
- Proven experience in dashboard creation, visualization, and alerting setup.
- Strong understanding of system monitoring, logging, and metrics collection.
- Excellent problem-solving and troubleshooting skills.
- Quick learner with a proactive attitude and adaptability to new technologies.
Good to Have (Optional)
- Experience with AWS services beyond CloudWatch.
- Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Scripting knowledge (Python, Bash, or similar).
Why Join Us
At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.

Description
Our engineering team is hiring a backend software engineer to contribute to the development of our Warehouse Management System (WMS) and its companion Handy Terminal device, both of which are integral to our logistics product suite. These systems are designed to seamlessly integrate with our state-of-the-art ASRS systems. The team’s mission is to build and maintain a robust, tested, and high-performance backend architecture, including databases and APIs, shared across all deployments. While the role emphasizes strong software development and engineering practices, we also value open communication and a collaborative team spirit.
In this role, you will:
- Design, develop, and maintain a key component that supports the efficient flow of supply chain operations.
- Enhance code quality and ensure comprehensive test coverage through continuous improvement.
- Collaborate effectively with cross-functional development teams to integrate solutions and align best practices.
Requirements
Minimum Qualifications:
- 2.5+ years of professional experience with Python, with a focus on versions 3.10 and above.
- Practical experience working with web frameworks such as FastAPI or Django.
- Strong understanding of SQL database principles, particularly with PostgreSQL.
- Proficiency in testing and building automation tools, including pytest, GitHub Actions, and Docker.
Bonus Points:
- Experience with NoSQL databases, particularly with Redis.
- Practical experience with asynchronous programming (e.g., asyncio) or message bus systems.
- Ability to clearly articulate technology choices and rationale (e.g., Tornado vs. Flask).
- Experience presenting at conferences or meetups, regardless of scale.
- Contributions to open-source projects.
- Familiarity with WMS concepts and logistics-related processes.
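The asynchronous-programming and message-bus points above can be sketched as an idempotent consumer: duplicate deliveries of the same message id are processed only once, which matters when a bus offers at-least-once delivery. The message shape (`{"id", "payload"}`) is an assumption for illustration.

```python
import asyncio

async def consume(messages, seen=None):
    """Process messages once each by id, skipping duplicate deliveries."""
    seen = set() if seen is None else seen
    processed = []
    for msg in messages:
        if msg["id"] in seen:        # duplicate delivery: skip
            continue
        seen.add(msg["id"])
        await asyncio.sleep(0)       # stand-in for real async I/O (DB write, etc.)
        processed.append(msg["payload"])
    return processed
```

Passing `seen` in from outside lets the dedup state live in Redis or the database in a real deployment, so idempotency survives process restarts.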
Is This the Right Role for You?
- You are motivated by the opportunity to make a tangible impact and deliver significant business value.
- You appreciate APIs that are thoughtfully designed with clear, well-defined objectives.
- You thrive on understanding how your work integrates and contributes to a larger, cohesive system.
- You are proactive and self-directed, identifying potential issues and gaps before they become problems in production.
Benefits
- Competitive salary package.
- Opportunity to work with a highly talented and diverse team.
- Comprehensive visa and relocation support.

Description
Our Engineering team is changing gears to meet the growing needs of our customers - from a handful of robots to hundreds of robots; from a small team to multiple squads. The team works closely with some of the premier enterprise customers in Japan to build state-of-the-art robotics solutions by leveraging rapyuta.io, our cloud robotics platform, and the surrounding ecosystem. The team’s mission is to pioneer scalable, collaborative, and flexible robotics solutions.
This role includes testing with real robots in a physical environment, testing virtual robots in a simulated environment, automating API tests, and automating systems level testing.
The ideal candidate is interested in working in a hands-on role with state-of-the-art robots.
In this role, the QA Engineer will be responsible for:
- Assisting in reviewing and analyzing the system specifications to define test cases
- Creating and maintaining test plans
- Executing test plans in a simulated environment and on hardware
- Defect tracking and generating bug and test reports
- Participating in implementing and improving QA processes
- Implementation of test automation for robotics systems
Requirements
Minimum qualifications
- 2.5+ years of technical experience in software Quality Assurance as an Individual Contributor
- Bachelor's degree in engineering, or combination of equivalent education and experience
- Experience writing, maintaining and executing software test cases, both manual and automated
- Experience writing, maintaining and executing software test cases that incorporate hardware interactions, including manual and automated tests to validate the integration of software with robotics systems
- Demonstrated experience with Python testing frameworks
- Expertise in Linux ecosystem
- Advanced knowledge of testing approaches: test levels; BDD/TDD; Blackbox/Whitebox approaches; regression testing
- Knowledge and practical experience of Agile principles and methodologies such as SCRUM
- HTTP API testing experience
Preferred qualifications
- Knowledge of HWIL, simulations, ROS
- Basic understanding of embedded systems and electronics
- Experience with developing/QA for robotics or hardware products will be a plus.
- Experience with testing frameworks such as TestNG, JUnit, Pytest, Playwright, Selenium, or similar tool
- ISTQB certification
- Japanese language proficiency
- Proficiency with version control repositories such as git
- Understanding of CI/CD systems such as: GHA; Jenkins; CircleCI
Benefits
- Competitive salary
- International working environment
- Bleeding edge technology
- Working with exceptionally talented engineers


Job Description
Who are you?
- SQL & CDC Pro: Strong SQL Server/T-SQL; hands-on CDC or replication patterns for initial snapshot + incremental syncs, including delete handling.
- Fabric Mirroring Practitioner: You’ve set up and tuned Fabric Mirroring to land data into OneLake/Lakehouse; comfortable with OneLake shortcuts and workspace/domain organisation.
- Schema-Drift Aware: You detect, evolve, and communicate schema changes safely (contracts, tests, alerts), minimising downstream breakage.
- High-Volume Ingestion Mindset: You design for throughput, resiliency, and backpressure—retries, idempotency, partitioning, and efficient file sizing.
- Python/Scala/Spark Capable: You can build notebooks/ingestion frameworks for advanced scenarios and data quality checks.
- Operationally Excellent: You add observability (logging/metrics/alerts), document runbooks, and partner well with platform, security, and analytics teams.
- Data Security Conscious: You respect PII/PHI, apply least privilege, and align with RLS/CLS patterns and governance guardrails.
What you will be doing?
- Stand Up Mirroring: Configure Fabric Mirroring from SQL Server (and other relational sources) into OneLake; tune schedules, snapshots, retention, and throughput.
- Land to Bronze Cleanly: Define Lakehouse folder structures, naming/tagging conventions, and partitioning for fast, organised Bronze ingestion.
- Handle Change at Scale: Implement CDC—including soft/hard deletes, late-arriving data, and backfills—using reliable watermarking and reconciliation checks.
- Design Resilient Pipelines: Build ingestion with Fabric Data Factory and/or notebooks; add retries, dead-lettering, and circuit-breaker patterns for fault tolerance.
- Manage Schema Drift: Automate drift detection and schema evolution; publish change notes and guardrails so downstream consumers aren’t surprised.
- Performance & Cost Tuning: Optimise batch sizes, file counts, partitions, parallelism, and capacity usage to balance speed, reliability, and spend.
- Observability & Quality: Instrument lineage, logs, metrics, and DQ tests (nulls, ranges, uniqueness); set up alerts and simple SLOs for ingestion health.
- Collaboration & Documentation: Partner with the Fabric Platform Architect on domains, security, and workspaces; document pipelines, SLAs, and runbooks.
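The change-handling work above can be sketched as applying CDC events in order to a keyed target, covering inserts, updates, and hard deletes. The event shape (`{"op", "key", "row"}`) is an assumption loosely modelled on common CDC payloads; a real pipeline would merge into Delta tables in Fabric rather than a dict.

```python
def apply_cdc(target, events):
    """Apply ordered CDC events to `target`, a dict keyed by primary key."""
    for ev in events:
        if ev["op"] in ("insert", "update"):
            target[ev["key"]] = ev["row"]   # upsert: last write wins
        elif ev["op"] == "delete":
            target.pop(ev["key"], None)     # hard delete; tolerate replays
    return target
```

Using `pop(..., None)` keeps delete application idempotent, so replaying a batch after a failure (a common recovery path) cannot raise on an already-removed key.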
Must-have skills
- SQL Server, T-SQL; CDC/replication fundamentals
- Microsoft Fabric Mirroring; OneLake/Lakehouse; OneLake shortcuts
- Schema drift detection/management and data contracts
- Familiarity with large, complex relational databases
- Python/Scala/Spark for ingestion and validation
- Git-based workflow; basic CI/CD (Fabric deployment pipelines or Azure DevOps)
Benefits
- 5 day work week
- 100% remote setup with flexible work culture and international exposure
- Opportunity to work on mission-critical healthcare projects impacting providers and patients globally

Note: The shift hours for this job are from 4PM- 1AM IST
About The Role:
We are seeking a highly skilled and experienced QA Automation Engineer with over 5 years of experience in both automation and manual testing. The ideal candidate will possess strong expertise in Python, Playwright, PyTest, Pywinauto, and Java with Selenium, API testing with Rest Assured, and SQL. Experience in the mortgage domain, Azure DevOps, and desktop & web application testing is essential. The role requires working in evening shift timings (4 PM – 1 AM IST) to collaborate with global teams.
Key Responsibilities:
- Design and develop automation test scripts using Python, Playwright, PywinAuto, and PyTest.
- Design, develop, and maintain automation frameworks for desktop applications using Java with WinAppDriver and Selenium, and Python with Pywinauto.
- Understand business requirements in the mortgage domain and prepare detailed test plans, test cases, and test scenarios.
- Define automation strategy and identify test cases to automate for web, desktop, and API testing.
- Perform manual testing for desktop, web, and API applications to validate functional and non-functional requirements.
- Create and execute API automation scripts using Rest Assured for RESTful services validation.
- Perform SQL queries to validate backend data and ensure data integrity in mortgage domain applications.
- Use Azure DevOps for test case management, defect tracking, CI/CD pipeline execution, and test reporting.
- Collaborate with DevOps and development teams to integrate automated tests within CI/CD pipelines.
- Use Git for version control and collaborative development.
- Manage test automation projects and dependencies using Maven.
- Work closely with developers, BAs, and product owners to clarify requirements and provide early feedback.
- Report and track defects with clear reproduction steps, logs, and screenshots until closure.
- Apply mortgage domain knowledge to test scenarios for loan origination, servicing, payments, compliance, and default modules.
- Ensure adherence to regulatory and compliance standards in mortgage-related applications.
- Perform cross-browser testing and desktop compatibility testing for client-based applications.
- Drive defect prevention by identifying gaps in requirements and suggesting improvements.
- Ensure best practices in test automation - modularization, reusability, and maintainability.
- Provide daily/weekly status reports on testing progress, defect metrics, and automation coverage.
- Maintain documentation for automation frameworks, test cases, and domain-specific scenarios.
- Work within Agile/Scrum development environments.
- Thrive in a fast-paced environment and consistently meet deadlines with minimal supervision.
- Collaborate effectively as a team player, managing multiple priorities in a deadline-driven environment.
Key requirements:
- 4-8 years of experience in Quality Assurance (manual and automation).
- Strong proficiency in Python, Pywinauto, PyTest, Playwright
- Hands-on experience with Rest Assured for API automation.
- Expertise in SQL for backend testing and data validation.
- Experience in mortgage domain applications (loan origination, servicing, compliance).
- Knowledge of Azure DevOps for CI/CD, defect tracking, and test case management.
- Proficiency in testing desktop and web applications.
- Excellent collaboration and communication skills to work with cross-functional global teams.
- Willingness to work in evening shift timings (4 PM – 1 AM IST).
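Backend data validation with SQL, as described in the responsibilities above, can be illustrated with a small sqlite3 sketch; the `loans` table and its columns are hypothetical stand-ins for a real mortgage schema.

```python
# Sketch of SQL-based backend validation: run queries against the
# application database and assert on the results. The `loans` table
# is a hypothetical stand-in for a mortgage-domain schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (loan_id TEXT PRIMARY KEY, principal REAL, status TEXT)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [("L001", 250000.0, "ACTIVE"), ("L002", 180000.0, "CLOSED")],
)

# Data-integrity check: no loan may have a non-positive principal.
bad = conn.execute("SELECT COUNT(*) FROM loans WHERE principal <= 0").fetchone()[0]
assert bad == 0, f"{bad} loans have non-positive principal"

# Verify a specific record matches the expected state.
row = conn.execute("SELECT status FROM loans WHERE loan_id = ?", ("L001",)).fetchone()
assert row[0] == "ACTIVE"
print("backend data checks passed")
```

The same pattern applies against the production database engine used by the application, with the queries managed alongside the automation suite.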



Job Title : Senior Technical Consultant (Polyglot)
Experience Required : 5 to 10 Years
Location : Bengaluru / Chennai (Remote Available)
Positions : 2
Notice Period : Immediate to 1 Month
Role Overview :
We seek passionate polyglot developers (Java/Python/Go) who love solving complex problems and building elegant digital products.
You’ll work closely with clients and teams, applying Agile practices to deliver impactful digital experiences.
Mandatory Skills :
Strong in Java/Python/Go (any 2), with frontend experience in React/Angular, plus knowledge of HTML, CSS, CI/CD, Unit Testing, and DevOps.
Key Skills & Requirements :
Backend (80% Focus) :
- Strong expertise in Java, Python, or Go (at least 2 backend stacks required).
- Additional exposure to Node.js, Ruby on Rails, or Rust is a plus.
- Hands-on experience in building scalable, high-performance backend systems.
Frontend (20% Focus) :
- Proficiency in React or Angular
- Solid knowledge of HTML, CSS, JavaScript
Other Must-Haves :
- Strong understanding of unit testing, CI/CD pipelines, and DevOps practices.
- Ability to write clean, testable, and maintainable code.
- Excellent communication and client-facing skills.
Roles & Responsibilities :
- Tackle technically challenging and mission-critical problems.
- Collaborate with teams to design and implement pragmatic solutions.
- Build prototypes and showcase products to clients.
- Contribute to system design and architecture discussions.
- Engage with the broader tech community through talks and conferences.
Interview Process :
- Technical Round (Online Assessment)
- Technical Round with Client (Code Pairing)
- System Design & Architecture (Build from Scratch)
✅ This is a backend-heavy polyglot developer role (80% backend, 20% frontend).
✅ The right candidate is hands-on, has multi-stack expertise, and thrives in solving complex technical challenges.

We're seeking talented developers with a genuine passion for building great software. People with good problem-solving skills can apply. We don't care about your GPA; all we need are your programming skills.

Company: I Vision Infotech
Location: Ahmedabad, Gujarat (Offline & Online)
Program Type: Paid Training with Internship & Placement Assistance
Duration: 3 Months (Internship Certificate Included)
About I Vision Training
I Vision Training (a unit of I Vision Infotech) offers job-oriented training and internships that prepare students and freshers for real-world work in IT. Programs include Python, AI/ML, Business Development Executive (BDE), HR, PHP, Laravel, Flutter, Web Development, and more. It has prepared students for the IT industry through live projects, an industry-aligned syllabus, and placement support. With branches in Ahmedabad, Kadi, and Mehsana, it provides both practical and online training.
About the Training Program
This Paid Python + Flask Job-Oriented Training Program is designed for students, freshers, and career changers aiming to build a strong career in Python Development and Backend Web Development using Flask. Participants will gain hands-on experience through real-time projects guided by experienced industry professionals.
What You Will Learn
Core Python Programming
- Python Basics: Data Types, Loops, Conditions, Functions
- Object-Oriented Programming (OOP)
- File Handling, Error Handling, Modules
Web Development with Flask
- Web App Development using Flask
- Template Integration (HTML/CSS)
- REST API Development
- CRUD Operations & Authentication
- Deployment on Hosting Platforms
Database Integration
- MySQL / SQLite / PostgreSQL
- ORM with Flask
Tools & Technologies
- Python 3.x, Flask
- Git & GitHub, Postman
- VS Code / PyCharm
- Cloud Deployment (Optional)
Training Highlights
- 100% Practical & Hands-on Learning
- Real-Time Projects & Assignments
- Git & Version Control Exposure
- 3-Month Internship Certificate
- Resume + GitHub Portfolio Support
- Interview Preparation & Placement Assistance
Eligibility Criteria
- BCA / MCA / B.Sc IT / M.Sc IT / Diploma / B.E / B.Tech
- No prior experience needed – basic computer knowledge required
- Strong interest in Python programming and Flask backend development
Important Notes
- Paid Training Program – Fees to be paid before batch starts
- Limited seats – First Come, First Served
- Only for serious learners aiming for a tech career

Company: I Vision Training
Location: Ahmedabad, Gujarat (Offline & Online)
Program Type: Paid Training with Internship & Placement Assistance
Duration: 3 Months (Internship Certificate Included)
About I Vision Training
I Vision Training (a unit of I Vision Infotech) offers job-oriented training and internships that prepare students and freshers for real-world work in IT. Programs include Python, AI/ML, Business Development Executive (BDE), HR, PHP, Laravel, Flutter, Web Development, and more. It has prepared students for the IT industry through live projects, an industry-aligned syllabus, and placement support. With branches in Ahmedabad, Kadi, and Mehsana, it provides both practical and online training.
About the Training Program
This Paid AI/ML Job-Oriented Training Program is designed for students, freshers, and career changers aiming to build a strong career in Artificial Intelligence and Machine Learning. Participants will gain hands-on experience through real-time projects guided by experienced industry professionals.
What You Will Learn
Core Python & Data Handling
- Python Basics: Data Types, Loops, Conditions, Functions
- Object-Oriented Programming (OOP)
- File Handling, Error Handling, Modules
- Libraries for AI/ML: NumPy, Pandas, Matplotlib, Seaborn
Machine Learning & AI Concepts
- Supervised & Unsupervised Learning
- Regression, Classification, Clustering
- Neural Networks & Deep Learning Basics
- Model Evaluation & Optimization
Tools & Technologies
- Python 3.x, Jupyter Notebook
- Scikit-learn, TensorFlow, Keras, PyTorch
- Git & GitHub, VS Code / PyCharm
- Cloud Deployment (Optional)
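As a taste of the supervised-learning topics above, a minimal scikit-learn classification example on a built-in toy dataset:

```python
# Train a classifier and evaluate it on held-out data: the basic
# supervised-learning loop covered in the curriculum above.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)   # max_iter raised so training converges
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

The same fit/predict/evaluate pattern carries over to the regression, clustering, and deep-learning topics listed above.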
Training Highlights
- 100% Practical & Hands-on Learning
- Real-Time AI/ML Projects & Assignments
- Git & Version Control Exposure
- 3-Month Internship Certificate
- Resume + GitHub Portfolio Support
- Interview Preparation & Placement Assistance
Eligibility Criteria
- BCA / MCA / B.Sc IT / M.Sc IT / Diploma / B.E / B.Tech
- No prior experience needed – basic computer knowledge required
- Strong interest in AI, Machine Learning, and Python programming
Important Notes
- Paid Training Program – Fees to be paid before batch starts
- Limited seats – First Come, First Served
- Only for serious learners aiming for a tech career


Job Title: Data Engineering Support Engineer / Manager
Experience Range: 8+ Years
Location: Mumbai
Experience :
Knowledge, Skills and Abilities
- Python, SQL
- Familiarity with data engineering
- Experience with AWS data and analytics services or similar cloud vendor services
- Strong problem solving and communication skills
- Ability to organise and prioritise work effectively
Key Responsibilities
- Incident and user management for data and analytics platform
- Development and maintenance of the Data Quality framework (including anomaly detection)
- Implementation of Python & SQL hotfixes and working with data engineers on more complex issues
- Diagnostic tools implementation and automation of operational processes
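The anomaly-detection aspect of the Data Quality framework mentioned above can be sketched with a simple z-score rule; the row counts and threshold are illustrative.

```python
# Flag a new observation (e.g. today's ingested row count) that deviates
# from recent history by more than `threshold` standard deviations.
# Values and threshold are illustrative.
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Z-score test of `value` against the recent `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

history = [1000, 1020, 980, 1010, 995]   # recent daily row counts
print(is_anomalous(history, 40))         # True  (load looks broken)
print(is_anomalous(history, 1005))       # False (within normal range)
```

In practice the check runs per table/metric and feeds the incident workflow described above.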
Key Relationships
- Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment
- Support research analysts and traders with issue resolution



About Us:
We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using their deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe
Experience Range: 6-10 Years
Role: Fullstack Technical Lead
Key Responsibilities:
- Develop and maintain scalable web applications using React for the frontend and Python (FastAPI/Flask/Django) for the backend.
- Work with databases such as PostgreSQL and MongoDB to design and manage robust data structures.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and fix bottlenecks and bugs.
- Others: AWS, Snowflake, Azure, JIRA, CI/CD pipelines
Key Requirements:
- React: Extensive experience in building complex frontend applications.
- Must-have: Experience with Python (FastAPI/Flask/Django).
- Required cloud experience: AWS or Azure.
- Experience with relational (PostgreSQL) and NoSQL (MongoDB) databases.
- Basic understanding of Data Fabric – Good to have
- Ability to work independently and as part of a team.
- Excellent problem-solving skills and attention to detail.
What We Offer
- Professional Development and Mentorship.
- Hybrid work mode with remote friendly workplace. (6 times in a row Great Place To Work Certified).
- Health and Family Insurance.
- 40+ Leaves per year along with maternity & paternity leaves.
- Wellness, meditation and Counselling sessions.

Location: Krishnagiri, Tamil Nadu
Experience: Minimum 2 years
Job Type: Full-time
Preferred Candidate: Female
About the Role:
We are seeking a dedicated and enthusiastic Computer Teacher to join our academic team. The ideal candidate will have at least 2 years of teaching experience, strong communication skills, and a passion for imparting computer knowledge to students at the school/college level.
Key Responsibilities:
- Deliver computer science curriculum to students as per academic guidelines.
- Teach foundational topics such as MS Office, Internet, HTML, and basic programming (e.g., Scratch, Python, C – as applicable).
- Plan and execute interactive lessons using digital teaching tools.
- Conduct practical sessions in the computer lab.
- Assess students’ progress through assignments, tests, and projects.
- Maintain attendance, grades, and student performance records.
- Encourage students to participate in tech-based activities and competitions.
- Collaborate with school/college staff for curriculum planning and development.
- Provide basic technical support for classroom technology when needed.
Required Qualifications:
- Bachelor’s degree in Computer Science, BCA, or any relevant discipline.
- B.Ed (preferred for school teaching roles).
- Minimum 2 years of teaching experience in a school or academic institution.
- Good command of English and Tamil (or local language as required).
- Strong classroom management and communication skills.
Preferred Qualities:
- Female candidates are preferred for this role.
- Ability to adapt teaching methods based on student needs.
- Familiarity with smart classroom tools and e-learning platforms.
- Passion for education and mentoring young minds.
Working Hours:
- Monday, Tuesday, Thursday to Saturday
- Timings: 9:00 AM to 4:00 PM
Salary Range:
₹15,000 – ₹20,000/month (based on experience and qualification)



Pay: ₹70,000.00 - ₹90,000.00 per month
Job description:
Name of the College: KGiSL Institute of Technology
College Profile: The main objective of KGiSL Institute of Technology is to provide industry-embedded education and to mold students for leadership in industry, government, and educational institutions; to advance the knowledge base of the engineering professions; and to influence the future directions of engineering education and practice. The ability to connect to future challenges and deliver industry-ready human resources is a credibility that KGISL Educational Institutions have progressively excelled at. Industry-readiness of its students is what eventually elevates an institution to star status and its competitiveness in the job market. Choice of such an institution will depend on its proximity to industry, the relevance of its learning programme to real-time industry, and the active connection a student will have with industry professionals.
Job Title: Assistant Professor / Associate Professor
Departments:
● CSE
Qualification:
● M.E./M.Tech/Ph.D. (Ph.D. mandatory for Associate Professor)
Experience:
● Freshers can Apply
● Experience: 8-10 Years
Key Responsibilities:
1. Teaching & Learning:
Deliver high-quality lectures and laboratory sessions in core and advanced areas of Computer Science & Engineering.
Prepare lesson plans, teaching materials, and assessment tools as per the approved curriculum.
Adopt innovative teaching methodologies, including ICT-enabled learning and outcome-based education (OBE).
2. Research & Publications:
Conduct independent and collaborative research in areas of specialization.
Publish research papers in peer-reviewed journals and present in reputed conferences.
Eligibility & Qualifications (As per AICTE/UGC Norms):
Educational Qualification: Ph.D. in Computer Science & Engineering or relevant discipline.
Experience: Minimum of 8 years teaching/research/industry experience, with at least 3 years at the level of Assistant Professor.
Research: Minimum of 7 publications in refereed journals as per UGC-CARE list and at least one Ph.D. degree awarded or ongoing under supervision.
Other Requirements:
Good academic record throughout.
Proven ability to attract research funding.
Strong communication and interpersonal skills.
Work Location: KGiSL Campus
Employment Type: Full-time / Permanent
Joining Time: Immediate
Job Type: Full-time
Benefits:
- Health insurance
- Life insurance
- Provident Fund
Work Location: In person


Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark & Scala to develop code to validate and implement models in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience in migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal command of business strategy, IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Willingness to learn and comprehend periodic changes in regulatory requirements per the FED.

Responsibilities
- Design, develop, and maintain backend systems and RESTful APIs using Python (Django, FastAPI, or Flask)
- Build real-time communication features using WebSockets, SSE, and async IO
- Implement event-driven architectures using messaging systems like Kafka, RabbitMQ, Redis Streams, or NATS
- Develop and maintain microservices that interact over messaging and streaming protocols
- Ensure high scalability and availability of backend services
- Collaborate with frontend developers, DevOps engineers, and product managers to deliver end-to-end solutions
- Write clean, maintainable code with unit/integration tests
- Lead technical discussions, review code, and mentor junior engineers
Requirements
- 8+ years of backend development experience, with at least 8 years in Python
- Strong experience with asynchronous programming in Python (e.g., asyncio, aiohttp, FastAPI)
- Production experience with WebSockets and Server-Sent Events
- Hands-on experience with at least one messaging system: Kafka, RabbitMQ, Redis Pub/Sub, or similar
- Proficient in RESTful API design and microservices architecture
- Solid experience with relational and NoSQL databases
- Familiarity with Docker and container-based deployment
- Strong understanding of API security, authentication, and performance optimization
Nice to Have
- Experience with GraphQL or gRPC
- Familiarity with stream processing frameworks (e.g., Apache Flink, Spark Streaming)
- Cloud experience (AWS, GCP, Azure), particularly with managed messaging or pub/sub services
- Knowledge of CI/CD and infrastructure as code
- Exposure to AI engineering workflows and tools
- Interest or experience in building Agentic AI systems or integrating backends with AI agents
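The asynchronous, message-driven pattern this role calls for can be sketched with stdlib asyncio; the in-process `Broker` below is a toy stand-in for Kafka/RabbitMQ/Redis, and all names are illustrative.

```python
# Toy event-driven pub/sub with asyncio: a publisher fans messages out to
# per-subscriber queues, and consumers process them concurrently.
import asyncio

class Broker:
    def __init__(self):
        self.subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, message) -> None:
        for q in self.subscribers:          # fan-out to every subscriber
            await q.put(message)

async def consumer(name: str, q: asyncio.Queue, out: list) -> None:
    while True:
        msg = await q.get()
        if msg is None:                     # sentinel: shut down
            return
        out.append(f"{name} got {msg}")

async def main() -> list:
    broker = Broker()
    out = []
    tasks = [asyncio.create_task(consumer(f"c{i}", broker.subscribe(), out))
             for i in range(2)]
    for event in ("order.created", "order.paid"):
        await broker.publish(event)
    await broker.publish(None)              # broadcast shutdown sentinel
    await asyncio.gather(*tasks)
    return out

received = asyncio.run(main())
print(received)
```

A production system would swap the in-process queues for a real broker client and add acknowledgements, retries, and backpressure.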


This is a full-time on-site role for a Full Stack Developer, located in Bengaluru. The developer will be responsible for designing, developing, and maintaining the server side of web and mobile applications.



Required Qualifications:
- Bachelor’s or Master’s degree in Electronics Engineering, Computer Engineering, Computer Science, or a related field.
- 5+ years of experience in developing test frameworks and automation for Android-based infotainment systems.
- Minimum 2+ years of C++ development experience with exposure to Android’s HAL.
- Proficiency in programming languages such as Java, C++, and Python.
- 1+ years of experience in Android application development.
- Strong understanding of Android Automotive System and Android Framework.
- In-depth knowledge of different Android components, including Services, Activities, Broadcast Receivers, and Content Providers.
- Strong understanding of embedded systems architecture and RTOS concepts.
- Familiarity with hardware interfaces such as I2C, SPI, UART, CAN, etc.
- Experience with version control systems (e.g., Git), CI/CD tools (e.g., Jenkins, GitLab CI), and project management tools (e.g., JIRA).
- Excellent problem-solving and debugging skills.
- Mentoring skills to support and guide less experienced team members.
- Excellent communication skills for effective interaction with stakeholders and clients.

Qualification:
- 8+ years in software testing, with 5+ years in automation testing.
- Minimum 2 years of experience managing QA teams.
- Strong expertise in test automation tools (e.g., Selenium, Cypress, Appium, JMeter).
- Hands-on experience with US healthcare standards (HIPAA, HL7, FHIR).
- Proficiency in programming languages like Java, Python, or JavaScript.
- Solid understanding of DevOps practices, CI/CD pipelines, and tools like Jenkins, GitLab, or Azure DevOps.
- Expertise in API testing using tools like Postman or RestAssured.
- Strong problem-solving and analytical abilities.
- Excellent communication and collaboration skills.
- Ability to prioritize and manage multiple projects in a fast-paced environment.
Employee Benefits:
HealthAsyst provides the following health, and wellness benefits to cover a range of physical and mental well-being for its employees.
- Bi-Annual Salary Reviews
- GMC (Group Mediclaim): Provides Insurance coverage of Rs. 3 lakhs + a corporate buffer of 2 Lakhs per family. This is a family floater policy, and the company covers all the employees, spouse, and up to two children
- Employee Wellness Program: HealthAsyst offers unlimited online doctor consultations for employees and their families across 31 specialties at no cost; in-person OPD consultations with GP doctors are also available at no cost
- GPA (Group Personal Accident): Provides insurance coverage of Rs. 20 lakhs to the employee against the risk of death/injury during the policy period sustained due to an accident
- GTL (Group Term Life): Provides life term insurance protection to employees in case of death. The coverage is one time of the employee’s CTC
- Employee Assistance Program: HealthAsyst offers complete confidential counselling services to employees & family members for mental wellbeing
- Sponsored upskills program for certifications/higher education up to 1 lakh
- Flexible Benefits Plan – covering a range of components like
- National Pension System.
- Internet/Mobile Reimbursements.
- Fuel Reimbursements.
- Professional Education Reimbursements.
- Flexible working hours
- 3 Day Hybrid Model


About HealthAsyst
HealthAsyst is a leading technology company based out of Bangalore India focusing on the US healthcare market with a product and services portfolio.
HealthAsyst IT services division offers a whole gamut of software services, helping clients effectively address their operational challenges. The services include product engineering, maintenance, quality assurance, custom-development, implementation & healthcare integration. The product division of HealthAsyst partners with leading EHR, PMS and RIS vendors to provide cutting-edge patient engagement solutions to small and large provider group in the US market.
Role and Responsibilities
- Act as a customer-facing AI expert, assisting in client consultations.
- Own the solutioning process and align AI projects with client requirements.
- Drive the development of AI and ML solutions that address business problems.
- Collaborate with development teams and solution architects to develop and integrate AI solutions.
- Monitor and evaluate the performance and impact of AI and ML solutions and ensure continuous improvement and optimization.
- Design and develop AI/ML models to solve complex business problems using supervised, unsupervised, and reinforcement learning techniques.
- Build, train, and evaluate machine learning pipelines, including data preprocessing, feature engineering, and model tuning.
- Establish and maintain best practices and standards for architecture, AI and ML models, innovation, and new technology evaluation.
- Collaborate with software developers to integrate AI capabilities into applications and workflows
- Develop APIs and microservices to serve AI models for real-time or batch inference.
- Foster a culture of innovation and collaboration within the COE, across teams and provide mentorship/guidance to the team members.
- Implement responsible AI practices, including model explainability, fairness, bias detection, and compliance with ethical standards.
- Experience in deploying AI models into production environments using tools like TensorFlow Serving, TorchServe, or container-based deployment (Docker, Kubernetes)
Qualifications
- 3+ years of experience in AI and ML projects.
- Proven track record of delivering successful AI and ML solutions that address complex business problems.
- Expertise in design, development, deployment and monitoring of AI ML solutions in production.
- Proficiency in various AI and ML techniques and tools, such as deep learning, NLP, computer vision, ML frameworks, cloud platforms, etc.
- 1+ year experience in building Generative AI applications leveraging Prompt Engineering and RAG
- Preference given to candidates with experience in Agentic AI, MCP, and the A2A (Agent2Agent) protocol.
- Strong leadership, communication, presentation and stakeholder management skills.
- Ability to think strategically, creatively and analytically, and to translate business requirements into AI and ML solutions.
- Passion for learning and staying updated with the latest developments and trends in the field of AI and ML.
- Demonstrated commitment to ethical and socially responsible AI practices
Employee Benefits:
HealthAsyst provides the following health, and wellness benefits to cover a range of physical and mental well-being for its employees.
- Bi-Annual Salary Reviews
- Flexible working hours
- 3 days Hybrid model
- GMC (Group Mediclaim): Provides Insurance coverage of Rs. 3 lakhs + a corporate buffer of 2 Lakhs per family. This is a family floater policy, and the company covers all the employees, spouse, and up to two children
- Employee Wellness Program: HealthAsyst offers unlimited online doctor consultations for employees and their families across 31 specialties at no cost; in-person OPD consultations with GP doctors are also available at no cost
- GPA (Group Personal Accident): Provides insurance coverage of Rs. 20 lakhs to the employee against the risk of death/injury during the policy period sustained due to an accident
- GTL (Group Term Life): Provides life term insurance protection to employees in case of death. The coverage is one time of the employee’s CTC
- Employee Assistance Program: HealthAsyst offers complete confidential counselling services to employees & family members for mental wellbeing
- Sponsored upskills program: The company will sponsor up to 1 Lakh for certifications/higher education/skill upskilling.
- Flexible Benefits Plan – covering a range of components like
a. National Pension System.
b. Internet/Mobile Reimbursements.
c. Fuel Reimbursements.
d. Professional Education Reimbursements.


Backend Engineer Python / Golang / Rust
Location : Bangalore, India
Experience Required : 2-3 years minimum
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python, Golang, or Rust to join our engineering team. The ideal candidate will have hands-on experience in building and maintaining enterprise-level, scalable backend services using microservices architecture.
Key Requirements :
Technical Skills :
- Programming Expertise : Advanced proficiency in Python with strong knowledge of Django, FastAPI, or Flask, OR expertise in Golang or Rust for backend development.
- Microservices Architecture : Solid experience in designing, developing, and maintaining microservices-based systems.
- Database Management : Hands-on experience with PostgreSQL, MySQL, MongoDB, including database design and optimization.
- API Development : Strong experience in designing and implementing RESTful APIs and GraphQL services.
- Cloud Platforms : Proficiency with AWS, GCP, or Azure services for deployment and scaling.
- Containerization & Orchestration : Strong knowledge of Docker and Kubernetes for scalable deployments.
- Messaging & Caching : Experience with Redis, RabbitMQ, Apache Kafka, and caching strategies (Redis, Memcached).
- Version Control : Advanced Git workflows and team collaboration best practices.
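The caching strategies named above can be illustrated with a minimal in-process TTL cache; in production this role is played by Redis or Memcached, and the 0.05-second TTL here is only for demonstration.

```python
# Minimal TTL (time-to-live) cache with lazy eviction on read, the core
# idea behind Redis/Memcached-style caching. Values are illustrative.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}                 # key -> (value, expiry_timestamp)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self.store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:
            del self.store[key]         # lazy eviction on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Asha"})
print(cache.get("user:42"))             # {'name': 'Asha'}
time.sleep(0.06)
print(cache.get("user:42"))             # None (expired)
```

The same get/set-with-expiry contract is what `SETEX`/`GET` give you in Redis, with the added benefits of being shared across service instances.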
Experience Requirements :
- Minimum 2-3 years of backend development experience.
- Proven track record of working on enterprise-level, production-grade applications.
- Strong background in microservices architecture and distributed systems.
- Experience in building scalable systems capable of handling high traffic loads.
- Familiarity with CI/CD pipelines, DevOps practices, and cloud-native deployments.
Responsibilities :
- Design, develop, and maintain robust, scalable backend services and APIs.
- Architect and implement microservices solutions ensuring modularity and resilience.
- Optimize application performance, database queries, and service scalability.
- Collaborate closely with frontend teams, product managers, and DevOps engineers.
- Implement security best practices and data protection measures.
- Write and maintain comprehensive unit and integration tests.
- Participate actively in code reviews and architectural discussions.
- Monitor, debug, and optimize system performance in production environments.
Preferred Qualifications :
- Strong understanding of software architecture patterns (event-driven, CQRS, hexagonal, etc.).
- Experience with Agile/Scrum methodologies.
- Contributions to open-source projects or personal backend projects.
- Experience with observability tools (Prometheus, Grafana, ELK, Jaeger).
About the Role
We are looking for a hands-on and solution-oriented Senior Data Scientist – Generative AI to join our growing AI practice. This role is ideal for someone who thrives in designing and deploying Gen AI solutions on AWS, enjoys working with customers directly, and can lead end-to-end implementations. You will play a key role in architecting AI solutions, driving project delivery, and guiding junior team members.
Key Responsibilities
- Design and implement end-to-end Generative AI solutions for customers on AWS.
- Work closely with customers to understand business challenges and translate them into Gen AI use-cases.
- Own technical delivery, including data preparation, model integration, prompt engineering, deployment, and performance monitoring.
- Lead project execution – ensure timelines, manage stakeholder communications, and collaborate across internal teams.
- Provide technical guidance and mentorship to junior data scientists and engineers.
- Develop reusable components and reference architectures to accelerate delivery.
- Stay updated with the latest developments in Gen AI, particularly AWS offerings such as Bedrock, SageMaker, and LangChain integrations.
Required Skills & Experience
- 4–8 years of hands-on experience in Data Science/AI/ML, with at least 2–3 years in Generative AI projects.
- Proficient in building solutions using AWS AI/ML services (e.g., SageMaker, Amazon Bedrock, Lambda, API Gateway, S3, etc.).
- Experience with LLMs, prompt engineering, RAG pipelines, and deployment best practices.
- Solid programming experience in Python, with exposure to libraries such as Hugging Face, LangChain, etc.
- Strong problem-solving skills and ability to work independently in customer-facing roles.
- Experience in collaborating with Systems Integrators (SIs) or working with startups in India is a major plus.
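A RAG pipeline of the kind mentioned above can be sketched end to end with a toy bag-of-words retriever; real pipelines would use proper embeddings (e.g., via Bedrock or Hugging Face) and a vector store, and all documents and names here are illustrative.

```python
# Toy RAG sketch: "embed" documents and a query as term-frequency vectors,
# retrieve the closest document by cosine similarity, and build a grounded
# prompt for an LLM. All data and names are illustrative.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above 500 USD.",
]
index = [(d, embed(d)) for d in docs]          # build the "vector store"

query = "how long are invoices processed"
best_doc, _ = max(index, key=lambda pair: cosine(embed(query), pair[1]))

# Ground the generation step in the retrieved context.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {query}"
print(best_doc)
```

Swapping `embed` for a real embedding model and `index` for OpenSearch/pgvector/Bedrock Knowledge Bases gives the production shape of the same pipeline.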
Soft Skills
- Strong verbal and written communication for effective customer engagement.
- Ability to lead discussions, manage project milestones, and coordinate across stakeholders.
- Team-oriented with a proactive attitude and strong ownership mindset.
What We Offer
- Opportunity to work on cutting-edge Generative AI projects across industries.
- Collaborative, startup-like work environment with flexibility and ownership.
- Exposure to full-stack AI/ML project lifecycle and client-facing roles.
- Competitive compensation and learning opportunities in the AWS AI ecosystem.
About Oneture Technologies
Founded in 2016, Oneture is a cloud-first, full-service digital solutions company that helps clients harness the power of digital technologies and data to drive transformation and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders dedicated to providing outstanding customer experiences and building authentic relationships. Our core values compel us to drive transformational results, from ideas to reality, for clients of all sizes, geographies, and industries. The Oneture team delivers full-lifecycle solutions, from ideation and project inception through planning and deployment to ongoing support and maintenance.
Our core competencies and technical expertise include cloud-powered product engineering, big data, and AI/ML. Our deep commitment to value creation for our clients and partners, together with our "startup-like agility with enterprise-like maturity" philosophy, has helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.


🌍 We’re Hiring: Senior Field AI Engineer | Remote | Full-time
Are you passionate about pioneering enterprise AI solutions and shaping the future of agentic AI?
Do you thrive in strategic technical leadership roles where you bridge advanced AI engineering with enterprise business impact?
We’re looking for a Senior Field AI Engineer to serve as the technical architect and trusted advisor for enterprise AI initiatives. You’ll translate ambitious business visions into production-ready applied AI systems, implementing agentic AI solutions for large enterprises.
What You’ll Do:
🔹 Design and deliver custom agentic AI solutions for mid-to-large enterprises
🔹 Build and integrate intelligent agent systems using frameworks like LangChain, LangGraph, CrewAI
🔹 Develop advanced RAG pipelines and production-grade LLM solutions
🔹 Serve as the primary technical expert for enterprise accounts and build long-term customer relationships
🔹 Collaborate with Solutions Architects, Engineering, and Product teams to drive innovation
🔹 Represent technical capabilities at industry conferences and client reviews
What We’re Looking For:
✔️ 7+ years of experience in AI/ML engineering with production deployment expertise
✔️ Deep expertise in agentic AI frameworks and multi-agent system design
✔️ Advanced Python programming and scalable backend service development
✔️ Hands-on experience with LLM platforms (GPT, Gemini, Claude) and prompt engineering
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS) and modern ML infrastructure
✔️ Cloud platform expertise (AWS, Azure, GCP) and MLOps/CI-CD knowledge
✔️ Strategic thinker able to balance technical vision with hands-on delivery in fast-paced environments
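The agentic-AI loop that frameworks such as LangChain, LangGraph, and CrewAI implement can be reduced to a small sketch: the model proposes a tool call, the runtime executes it, and the result is fed back until the model produces a final answer. Everything below is hypothetical, including `fake_llm`, which stands in for a real model call (GPT, Gemini, Claude) and a scripted two-step conversation.

```python
import json

# Toy tool registry: in a real agentic system these would be API clients,
# retrievers, code executors, etc.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def fake_llm(history: list[str]) -> str:
    # Stand-in for an LLM call. It emits either a JSON tool call or a
    # final answer, depending on what it has already observed.
    if not any(h.startswith("tool_result") for h in history):
        return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})
    return "final: the sum is " + history[-1].split(":")[-1].strip()

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        action = fake_llm(history)
        if action.startswith("final:"):           # model is done
            return action.removeprefix("final:").strip()
        call = json.loads(action)                 # parse the tool call
        result = TOOLS[call["tool"]](call["args"])  # execute the tool
        history.append(f"tool_result: {result}")  # feed the result back
    return "gave up"

print(run_agent("What is 2 + 3?"))
```

Multi-agent designs repeat this loop per agent and route messages between them; the frameworks add planning, memory, and error handling around the same core.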
✨ Why Join Us:
- Drive enterprise AI transformation for global clients
- Work with a category-defining AI platform bridging agents and experts
- High-impact, customer-facing role with strategic influence
- Competitive benefits: medical, vision, dental insurance, 401(k)


🌍 We’re Hiring: Customer Facing Data Scientist (CFDS) | Remote | Full-time
Are you passionate about applied data science and enjoy partnering directly with enterprise customers to deliver measurable business impact?
Do you thrive in fast-paced, cross-functional environments and want to be the face of a cutting-edge AI platform?
We’re looking for a Customer Facing Data Scientist to design, develop, and deploy machine learning applications with our clients, helping them unlock the value of their data while building strong, trusted relationships.
What You’ll Do:
🔹 Collaborate directly with customers to understand their business challenges and design ML solutions
🔹 Manage end-to-end data science projects with a customer success mindset
🔹 Build long-term trusted relationships with enterprise stakeholders
🔹 Work across industries: Banking, Finance, Health, Retail, E-commerce, Oil & Gas, Marketing
🔹 Evangelize the platform and teach, enable, and support customers in building AI solutions
🔹 Collaborate internally with Data Science, Engineering, and Product teams to deliver robust solutions
What We’re Looking For:
✔️ 5–10 years of experience solving complex data problems using Machine Learning
✔️ Expert in ML modeling and Python coding
✔️ Excellent customer-facing communication and presentation skills
✔️ Experience in AI services or startup environments preferred
✔️ Domain expertise in Finance is a plus
✔️ Applied experience with Generative AI / LLM-based solutions is a plus
✨ Why Join Us:
- High-impact opportunity to shape a new business vertical
- Work with next-gen AI technology to solve real enterprise problems
- Backed by top-tier investors with experienced leadership
- Recognized as a Top 5 Data Science & ML platform by G2
- Comprehensive benefits: medical, vision, dental insurance, 401(k)