50+ Python Jobs in Delhi, NCR and Gurgaon | Python Job openings in Delhi, NCR and Gurgaon
Apply to 50+ Python Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.


AccioJob is conducting a Walk-In Hiring Drive with MakunAI Global for the position of Python Engineer.
To apply, register and select your slot here: https://go.acciojob.com/cE8XQy
Required Skills: DSA, Python, Django, FastAPI
Eligibility:
- Degree: All
- Branch: All
- Graduation Year: 2022, 2023, 2024, 2025
Work Details:
- Work Location: Noida (Hybrid)
- CTC: 3.2 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Skill Centers located in Noida, Greater Noida, and Delhi
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/cE8XQy

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Role Overview:
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities:
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
Required Skills and Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Hugging Face Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.

Job Description: Senior Full-Stack Engineer (MERN + Python)
Location: Noida (Onsite)
Experience: 5 to 10 years
We are hiring a Senior Full-Stack Engineer with proven expertise in MERN technologies and Python backend frameworks to deliver scalable, efficient, and maintainable software solutions. You will design and build web applications and microservices, leveraging FastAPI and advanced asynchronous programming techniques to ensure high performance and reliability.
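For candidates gauging the level expected, here is a minimal sketch of the asynchronous patterns this role calls for: an async FastAPI endpoint plus a process pool for CPU-bound work. All routes and names are illustrative, not taken from the posting.

```python
# Minimal sketch (illustrative names only): an async FastAPI service that
# keeps the event loop free for I/O and offloads CPU-bound work to a
# process pool, combining AsyncIO with multiprocessing.
import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import FastAPI

app = FastAPI()
pool = ProcessPoolExecutor()  # worker processes for CPU-bound tasks

def heavy_computation(n: int) -> int:
    # Stand-in for CPU-bound work such as report generation.
    return sum(i * i for i in range(n))

@app.get("/items/{item_id}")
async def read_item(item_id: int) -> dict:
    # Awaiting I/O never blocks other requests on the event loop.
    await asyncio.sleep(0.01)  # placeholder for an async DB/HTTP call
    return {"item_id": item_id}

@app.get("/reports/{n}")
async def build_report(n: int) -> dict:
    # run_in_executor moves CPU-bound work off the event loop.
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(pool, heavy_computation, n)
    return {"result": result}
```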
Key Responsibilities:
- Develop and maintain web applications using the MERN stack alongside Python backend microservices.
- Build efficient and scalable APIs with Python frameworks like FastAPI and Flask, utilizing AsyncIO, multithreading, and multiprocessing for optimal performance.
- Lead architecture and technical decisions spanning both MERN frontend and Python microservices backend.
- Collaborate with UX/UI designers to create intuitive and responsive user interfaces.
- Mentor junior developers and conduct code reviews to ensure adherence to best practices.
- Manage and optimize databases such as MongoDB and PostgreSQL for application and microservices needs.
- Deploy, monitor, and maintain applications and microservices on AWS cloud infrastructure (EC2, Lambda, S3, RDS).
- Implement CI/CD pipelines to automate integration and deployment processes.
- Participate in Agile development practices including sprint planning and retrospectives.
- Ensure application scalability, security, and performance across frontend and backend systems.
- Design cloud-native microservices architectures focused on high availability and fault tolerance.
Required Skills and Experience:
- Strong hands-on experience with the MERN stack: MongoDB, Express.js, React.js, Node.js.
- Proven Python backend development expertise with FastAPI and Flask.
- Deep understanding of asynchronous programming using AsyncIO, multithreading, and multiprocessing.
- Experience designing and developing microservices and RESTful/GraphQL APIs.
- Skilled in database design and optimization for MongoDB and PostgreSQL.
- Familiar with AWS services such as EC2, Lambda, S3, and RDS.
- Experience with Git, CI/CD tools, and automated testing/deployment workflows.
- Ability to lead teams, mentor developers, and make key technical decisions.
- Strong problem-solving, debugging, and communication skills.
- Comfortable working in Agile environments and collaborating cross-functionally.

Sr. Staff Engineer Role
We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you’ll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you’ll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.
As the Sr. Staff Engineer, you’ll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We’re looking for a technical leader who’s not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.
Responsibilities
● Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
● Drive the design and development of scalable backend services using Python/Node.js.
● Experience in Django, FastAPI, and task orchestration systems.
● Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
● Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
● Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
● Set and enforce engineering best practices, code quality standards, and operational excellence.
● Stay up-to-date with industry trends and advocate for continuous improvement in engineering processes.
● Experience in fintech, tax, or compliance industries.
● Familiarity with containerization tools like Docker and orchestration with Kubernetes.
● Background in security, observability, or compliance automation.
Requirements
● 8+ years of software engineering experience, with at least 2 years in a leadership or principal-level role.
● Deep expertise in Python/Node.js, including API development, performance optimization, and testing.
● Experience with event-driven architecture and messaging systems like Kafka/RabbitMQ.
● Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
● Solid understanding of Terraform for infrastructure as code.
● Proficiency with Jenkins or similar CI/CD tooling.
● Comfortable balancing technical leadership with hands-on coding and problem-solving.
● Strong communication skills and a collaborative mindset.


Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the un-trodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
Key Responsibilities:
- Design and implement advanced AI/ML models and algorithms to address real-world challenges.
- Analyze large and complex datasets to derive actionable insights and train predictive models.
- Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
- Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
- Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
- Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
- Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
- Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed.
Required Skills:
- Strong programming skills in Python (preferred); experience with R or Java is also valuable.
- Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Solid foundation in data structures, algorithms, statistics, and machine learning principles.
- Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
- Proven experience in deploying and maintaining AI/ML models in production environments.
- Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway (a minimal handler sketch follows this list).
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
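For reference, a minimal sketch of the Lambda-plus-API-Gateway pattern listed above, assuming API Gateway's Lambda proxy integration; the route and payload shape are illustrative.

```python
# Minimal AWS Lambda handler behind API Gateway (proxy integration).
# Route and payload shape are illustrative assumptions.
import json

def lambda_handler(event, context):
    # With proxy integration, path parameters arrive in the event dict.
    item_id = (event.get("pathParameters") or {}).get("id", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"item_id": item_id, "status": "ok"}),
    }
```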
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
Job Description : Software Testing (Only Female)
VirtuBox, the world's premier B2B cloud-based SaaS solution, empowers businesses to forge unforgettable customer experiences that transcend screens and ignite brand loyalty. In short, VirtuBox is transforming customer journeys, one pixel at a time.
Job Summary :
We are seeking a proactive and detail-oriented Software Tester with 1–2 years of experience in manual and/or automation testing. The ideal candidate will work closely with developers and product teams to ensure high-quality software delivery by identifying bugs, writing test cases, and executing comprehensive test cycles.
Key Responsibilities :
- Analyze software requirements and design test cases to ensure functionality and performance.
- Identify, document, and track defects using bug-tracking tools.
- Collaborate with developers and stakeholders to resolve issues and improve software quality.
- Perform functional, regression, system, and performance testing.
- Execute automated testing using tools like Selenium, JMeter, or Appium.
- Participate in agile development processes, including stand-up meetings and sprint planning.
- Prepare detailed test reports and documentation for stakeholders.
- Conduct security and usability testing to ensure compliance with industry standards.
- Manage test data to create realistic testing scenarios.
- Validate bug fixes and ensure all functionalities work correctly before release.
Skills Required:
Soft Skills:
- Strong analytical and problem-solving skills.
- Good communication and teamwork abilities.
- Attention to detail and ability to work under deadlines.
Technical Skills:
- Knowledge of manual testing and automated testing tools (Selenium, JMeter, Appium, etc.).
- Understanding of SDLC (Software Development Life Cycle) and STLC (Software Testing Life Cycle).
- Familiarity with defect tracking tools (JIRA, Bugzilla, etc.).
- Basic programming knowledge (Python, Java, SQL) is a plus.
Eligibility Criteria :
- Bachelor’s degree in Computer Science, IT, or related field.
- 1–2 years of hands-on experience in software testing.
- Excellent analytical and communication skills.
- ISTQB certification is desirable but not mandatory.
- Basic knowledge of any scripting or programming language is a plus.
- Strong problem-solving and analytical skills.

We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you’ll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you’ll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.
As the Sr. Staff Engineer, you’ll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We’re looking for a technical leader who’s not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.
Responsibilities
- Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
- Drive the design and development of scalable backend services using Python.
- Experience in Django, FastAPI, Task Orchestration Systems.
- Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
- Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
- Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
- Set and enforce engineering best practices, code quality standards, and operational excellence.
- Stay up-to-date with industry trends and advocate for continuous improvement in engineering processes.
- Experience in fintech, tax, or compliance industries.
- Familiarity with containerization tools like Docker and orchestration with Kubernetes.
- Background in security, observability, or compliance automation.
Requirements
- 7+ years of software engineering experience, with at least 2 years in a leadership or principal-level role.
- Deep expertise in Python, including API development, performance optimization, and testing.
- Experience in Event-driven architecture, Kafka/RabbitMQ-like systems.
- Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
- Solid understanding of Terraform for infrastructure as code.
- Proficiency with Jenkins or similar CI/CD tooling.
- Comfortable balancing technical leadership with hands-on coding and problem-solving.
- Strong communication skills and a collaborative mindset.

Location: Hybrid/Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot-product); a minimal retrieval sketch follows this list.
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
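To illustrate the retrieval step named above, here is a minimal in-memory sketch of cosine-similarity top-k search with numpy. Toy data only; a real pipeline would delegate this to a managed vector store such as MongoDB Atlas Vector Search or Pinecone.

```python
# Minimal in-memory sketch of the RAG retrieval step: score stored chunk
# embeddings against a query vector by cosine similarity and take the top-k.
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, chunks, k=3):
    # Cosine similarity of the query against every stored chunk embedding.
    scores = (chunk_vecs @ query_vec) / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    best = np.argsort(scores)[::-1][:k]
    return [(chunks[i], float(scores[i])) for i in best]

# Toy 3-dimensional embeddings standing in for real model output.
chunks = ["user log A", "user log B", "user log C"]
chunk_vecs = np.array([[0.1, 0.9, 0.0],
                       [0.8, 0.1, 0.1],
                       [0.0, 0.2, 0.9]])
query_vec = np.array([0.7, 0.2, 0.1])
print(top_k_chunks(query_vec, chunk_vecs, chunks, k=2))
```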
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face); see the fine-tuning sketch after this list.
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
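As a pointer for the parameter-efficient fine-tuning requirement above, a minimal Hugging Face PEFT sketch for attaching a LoRA adapter; the base model and hyperparameters are illustrative assumptions, not details from the posting.

```python
# Minimal sketch of attaching a LoRA adapter with Hugging Face PEFT.
# Base model choice and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
config = LoraConfig(
    r=8,                # low-rank dimension of the adapter matrices
    lora_alpha=16,      # scaling factor
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only adapter weights are trainable
```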
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Job Description:
Title : Python AWS Developer with API
Tech Stack: AWS API Gateway, Lambda, Oracle RDS, SQL & database management, OOP principles, JavaScript, object-relational mappers (ORMs), Git, Docker, Java dependency management, CI/CD, AWS Cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python; write and debug Python code and integrate applications with third-party web services.
· Troubleshoot and debug non-production defects in back-end and API code, with a primary focus on coding and monitoring applications.
· Design core application logic.
· Support dependent teams in UAT and perform functional application testing, including Postman testing.


AccioJob is conducting a Walk-In Hiring Drive with IT services firm for the position of AI Engineer.
To apply, register and select your slot here: https://go.acciojob.com/283eXn
Required Skills: Python, Machine Learning, Deep Learning, Prompt Engineering
Eligibility:
Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
Branch: Electrical/Other electrical related branches, Computer Science/CSE/Other CS related branch, IT
Graduation Year: 2023, 2024, 2025
Work Details:
Work Location: Noida (Onsite)
CTC: 3 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida, Delhi & Greater Noida Centres
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
- HR Interview Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/283eXn


AccioJob is conducting a Walk-In Hiring Drive with IT services firm for the position of Full Stack Developer.
To apply, register and select your slot here: https://go.acciojob.com/qhtfYQ
Required Skills: Python, JavaScript, React JS
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: Electrical/Other electrical related branches, Computer Science/CSE/Other CS related branch, IT
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Noida (Onsite)
- CTC: 3 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida, Delhi & Greater Noida Centres
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/qhtfYQ

Key Responsibilities
- Design, develop, and maintain automated test scripts using Python, pytest, and Selenium for Salesforce and web applications (see the test sketch after this list).
- Create and manage test environments using Docker to ensure consistent testing conditions.
- Collaborate with developers, business analysts, and stakeholders to understand requirements and define test scenarios.
- Execute automated and manual tests, analyze results, and report defects using GitLab or other tracking tools.
- Perform regression, functional, and integration testing for Salesforce applications and customizations.
- Ensure test coverage for Salesforce features, including custom objects, workflows, and Apex code.
- Contribute to continuous integration/continuous deployment (CI/CD) pipelines in GitLab for automated testing.
- Document test cases, processes, and results to maintain a comprehensive testing repository.
- Stay updated on Salesforce updates, testing tools, and industry best practices.
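As a gauge of the expected tooling, a minimal pytest-plus-Selenium sketch; the URL and element locators are hypothetical placeholders, not the company's actual test targets.

```python
# Minimal pytest + Selenium sketch for a login smoke test.
# URL and element locators are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # run without a display, e.g. in Docker
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_login_page_title(driver):
    driver.get("https://example.com/login")  # placeholder URL
    assert "Login" in driver.title

def test_login_form_present(driver):
    driver.get("https://example.com/login")
    assert driver.find_element(By.NAME, "username") is not None
```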
Required Qualifications
- 1-3 years of experience in automation testing, preferably with exposure to Salesforce applications.
- Proficiency in Python, pytest, Selenium, Docker, and GitLab for test automation and version control.
- Understanding of software testing methodologies, including functional, regression, and integration testing.
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Strong problem-solving skills and attention to detail.
- Excellent verbal and written communication skills.
- Ability to work in a collaborative, fast-paced team environment.
Preferred Qualifications
- Experience with Salesforce platform testing, including Sales Cloud, Service Cloud, or Marketing Cloud.
- Active Salesforce Trailhead profile with demonstrated learning progress (please include Trailhead profile link in application).
- Salesforce certifications (e.g., Salesforce Administrator or Platform Developer) are a plus.
- Familiarity with testing Apex code, Lightning components, or Salesforce integrations.
- Experience with Agile/Scrum methodologies.
- Knowledge of Webkul’s product ecosystem or e-commerce platforms is an advantage.

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field

Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python (a minimal consumer sketch follows this list).
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
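For context, a minimal sketch of a Python Kafka consumer of the kind this role describes, using the kafka-python client; the topic name, broker address, and record schema are illustrative assumptions.

```python
# Minimal sketch of a Python Kafka consumer for a streaming pipeline.
# Topic, broker address, and record schema are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "patient-events",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    group_id="healthcare-etl",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Transform/validate here, then land the record in the downstream sink
    # (e.g., a Databricks/lakehouse table).
    print(record.get("event_type"), message.offset)
```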
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.

Hybrid work mode
(Azure) EDW: Experience loading star-schema data warehouses using framework architectures, including loading Type 2 dimensions; ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures. A minimal Type 2 load sketch follows below.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Storage Gen2, SQL (expert), Python (intermediate), Azure cloud services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).
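For illustration, a minimal sketch of a Type 2 slowly changing dimension load with Delta Lake on Databricks; the table names, columns, and change-detection condition are illustrative assumptions only.

```python
# Minimal sketch of a Type 2 SCD load with Delta Lake on Databricks.
# Table names, columns, and the change condition ("address") are
# illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
updates = spark.table("staging_customer")          # incoming batch
dim = DeltaTable.forName(spark, "dim_customer")    # existing dimension

# Step 1: expire current rows whose tracked attributes changed.
(dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.address <> u.address",
        set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: append new versions; changed customers now lack a current row,
# so this also picks up brand-new customers.
current = dim.toDF().filter("is_current = true")
new_rows = updates.join(current, "customer_id", "left_anti")
(new_rows
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("dim_customer"))
```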


We are building an advanced, AI-driven multi-agent software system designed to revolutionize task automation and code generation. This is a futuristic AI platform capable of:
✅ Real-time self-coding based on tasks
✅ Autonomous multi-agent collaboration
✅ AI-powered decision-making
✅ Cross-platform compatibility (Desktop, Web, Mobile)
We are hiring a highly skilled **AI Engineer & Full-Stack Developer** based in India, with a strong background in AI/ML, multi-agent architecture, and scalable, production-grade software development.
### Responsibilities:
- Build and maintain a multi-agent AI system (AutoGPT, BabyAGI, MetaGPT concepts)
- Integrate large language models (GPT-4o, Claude, open-source LLMs)
- Develop full-stack components (Backend: Python, FastAPI/Flask; Frontend: React/Next.js)
- Work on real-time task execution pipelines
- Build cross-platform apps using Electron or Flutter
- Implement Redis, vector databases, and scalable APIs (see the vector-index sketch after this list)
- Guide the architecture of autonomous, self-coding AI systems
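To illustrate the vector-database item above, a minimal FAISS sketch that builds an exact index over toy embeddings and runs a nearest-neighbour query; the dimensions and data are illustrative.

```python
# Minimal FAISS sketch: build an exact L2 index over toy embeddings and
# run a nearest-neighbour query. Dimensions and data are illustrative.
import faiss
import numpy as np

d = 64                                              # embedding dimension
rng = np.random.default_rng(0)
corpus = rng.random((1000, d), dtype=np.float32)    # stand-in embeddings

index = faiss.IndexFlatL2(d)                        # exact L2 search
index.add(corpus)                                   # index the corpus

query = rng.random((1, d), dtype=np.float32)
distances, ids = index.search(query, 5)             # 5 nearest neighbours
print(ids[0], distances[0])
```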
### Must-Have Skills:
- Python (advanced, AI applications)
- AI/ML experience, including multi-agent orchestration
- LLM integration knowledge
- Full-stack development: React or Next.js
- Redis, Vector Databases (e.g., Pinecone, FAISS)
- Real-time applications (websockets, event-driven)
- Cloud deployment (AWS, GCP)
### Good to Have:
- Experience with code-generation AI models (Codex, GPT-4o coding abilities)
- Microservices and secure system design
- Knowledge of AI for workflow automation and productivity tools
Join us to work on cutting-edge AI technology that builds the future of autonomous software.


About the Role
At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution.
Our flagship platform is currently under development. As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You’ll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.
We are a remote-first team and are open to hiring exceptional candidates globally.
Core Tasks
· Build and maintain the trading engine core for execution, backtesting, and event logging.
· Develop isolated strategy execution runners to support multi-user, multi-strategy environments.
· Implement abstraction layers for brokers and market data feeds to offer a unified API experience.
· Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies (a minimal ZeroMQ sketch follows this list).
· Implement logic to parse and execute JSON-based strategy DSL from the strategy builder.
· Design compute-optimized components for multi-asset workflows and scalable backtesting.
· Capture real-time state, performance metrics, and slippage for both live and simulated runs.
· Collaborate with infrastructure engineers to support high-availability deployments.
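To illustrate the interop task above, a minimal pyzmq request/reply sketch bridging an engine process and Python strategy code; the port and message schema are illustrative assumptions, and the engine side could equally be C++/Rust over libzmq.

```python
# Minimal sketch of bridging an engine process and Python strategy code
# over ZeroMQ request/reply sockets (pyzmq). Port and message schema are
# illustrative assumptions.
import json
import zmq

def serve_strategy(endpoint: str = "tcp://*:5555") -> None:
    """Reply to engine tick requests with strategy decisions."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)
    while True:
        tick = json.loads(sock.recv())  # e.g. {"symbol": "X", "price": 101.2}
        decision = {"action": "buy" if tick["price"] < 100 else "hold"}
        sock.send(json.dumps(decision).encode())

def ask_strategy(tick: dict, endpoint: str = "tcp://localhost:5555") -> dict:
    # Engine-side client; in production, reuse one Context/socket per process.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect(endpoint)
    sock.send(json.dumps(tick).encode())
    return json.loads(sock.recv())
```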
Top Technical Competencies
· Proficiency in distributed systems, concurrency, and system design.
· Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.
· Deep understanding of data structures and algorithms with a focus on low-latency performance.
· Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).
· Familiarity with Linux-based environments and system-level performance tuning.
Bonus Competencies
· Understanding of financial markets, asset classes, and algorithmic trading strategies.
· 3–5 years of prior Backend experience.
· Hands-on experience with backtesting frameworks or financial market simulators.
· Experience with sandboxed execution environments or paper trading platforms.
· Advanced knowledge of multithreading, memory optimization, or compiler construction.
· Educational background from Tier-I or Tier-II institutions with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.
What We Offer
· Opportunity to shape the backend architecture of a next-gen fintech startup.
· A collaborative, technically driven culture.
· Competitive compensation with performance-based bonuses.
· Flexible working hours and a remote-friendly environment for candidates across the globe.
· Exposure to financial modeling, trading infrastructure, and real-time applications.
· Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
Ideal Candidate
You’re a backend-first thinker who’s obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.


AccioJob is conducting an offline hiring drive with B2B Automation Platform for the position of SDE Trainee Python.
Link for registration: https://go.acciojob.com/6kT7Ea
Position: SDE Trainee Python – DSA, Python, Django/Flask
Eligibility Criteria:
- Degree: B.Tech / BE / MCA
- Branch: CS / IT
- Work Location: Noida
Compensation:
- CTC: ₹4 - ₹5 LPA
- Service Agreement: 2-year commitment
Note:
Candidates must be available for face-to-face interviews in Noida and should be ready to join immediately.
Evaluation Process:
Round 1: Assessment at AccioJob Noida Skill Centre
Further Rounds (for shortlisted candidates):
- Technical Interview 1
- Technical Interview 2
- Tech + Managerial Round (Face-to-Face)
Important:
Please bring your laptop for the assessment.
Link for registration: https://go.acciojob.com/6kT7Ea

A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect with expertise in the areas below.
Key Responsibilities:
1. Architectural Design & Governance
• Define, document, and maintain the technical architecture for projects and product modules.
• Ensure architectural decisions meet scalability, performance, and security requirements.
2. Solution Development & Technical Leadership
• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.
• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.
3. Collaboration & Alignment
• Work closely with Product Managers and Project Managers to prioritize and plan feature development.
• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.
4. Mentorship & Code Quality
• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.
• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.
5. Risk Management & Innovation
• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.
• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.
6. Documentation & Standards
• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.
• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.
Skills:
1. Technical Expertise
• 7–10 years of overall experience in software development with at least a couple of years in senior or lead roles.
• Strong proficiency in at least one mainstream programming language (e.g., Golang, Python, JavaScript).
• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).
• Good understanding of Cloud Platforms (AWS, Azure, or GCP) and DevOps practices (CI/CD pipelines, containerization with Docker/Kubernetes).
• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Location: Saket, Delhi (Work from Office)
Schedule: Monday – Friday
Experience : 7-10 Years
Compensation: As per industry standards


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.

AccioJob is conducting an offline hiring drive in partnership with Our Partner Company to hire Junior Business/Data Analysts for an internship with a Pre-Placement Offer (PPO) opportunity.
Apply, Register and select your Slot here: https://go.acciojob.com/69d3Wd
Job Description:
- Role: Junior Business/Data Analyst (Internship + PPO)
- Work Location: Hyderabad
- Internship Stipend: ₹15,000 - ₹25,000/month
- Internship Duration: 3 months
- CTC on PPO: 5 LPA - 6 LPA
Eligibility Criteria:
- Degree: Open to all academic backgrounds
- Graduation Year: 2023, 2024, 2025
Required Skills:
- Proficiency in SQL, Excel, Power BI, and basic Python
- Strong analytical mindset and interest in solving business problems with data
Hiring Process:
- Offline Assessment at AccioJob Skill Centres (Hyderabad, Pune, Noida)
- 1 Assignment + 2 Technical Interviews (Virtual; In-person for Hyderabad candidates)
Note: Please bring your laptop and earphones for the test.
Register Here: https://go.acciojob.com/69d3Wd
AccioJob is organizing an exclusive offline hiring drive in collaboration with GameBerry Labs for the role of Software Development Engineer 1 (SDE 1).
To Apply, Register and select your Slot here: https://go.acciojob.com/Zq2UnA
Job Description:
- Role: SDE 1
- Work Location: Bangalore
- CTC: 10 LPA - 15 LPA
Eligibility Criteria:
- Education: B.Tech, BE, BCA, MCA, M.Tech
- Branches: Circuit Branches (CSE, ECE, IT, etc.)
- Graduation Year:
- 2024 (Minimum 9 months of experience)
- 2025 (Minimum 3-6 months of experience)
Evaluation Process:
- Offline Assessment at AccioJob Skill Centres (Hyderabad, Bangalore, Pune, Noida)
- Technical Interviews (2 Rounds - Virtual for most; In-person for Bangalore candidates)
Note: Carry your laptop and earphones for the assessment.
Register Here: https://go.acciojob.com/Zq2UnA



🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiners
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes (a minimal ETL sketch follows this list).
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
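To illustrate the pipeline item above, a minimal pandas ETL sketch; the API endpoint, file name, and connection string are hypothetical placeholders.

```python
# Minimal ETL sketch: extract from an API and a CSV, transform, load to a
# warehouse table. Endpoint, file, and connection string are hypothetical.
import pandas as pd
import requests
from sqlalchemy import create_engine

def extract() -> pd.DataFrame:
    # Assumes the API returns a JSON list of row dicts.
    api_rows = requests.get("https://example.com/api/orders", timeout=30).json()
    csv_rows = pd.read_csv("legacy_orders.csv")        # hypothetical file
    return pd.concat([pd.DataFrame(api_rows), csv_rows], ignore_index=True)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="order_id")         # basic cleansing
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    return df.dropna(subset=["order_id", "order_date"])

def load(df: pd.DataFrame) -> None:
    engine = create_engine("postgresql://user:pass@host/warehouse")  # placeholder
    df.to_sql("orders_clean", engine, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract()))
```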
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Infrastructure Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/kcYTAp
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Linux, Networking, One scripting language among Python, Bash, and PowerShell, OOPs, Cloud Platforms (AWS, Azure)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE Core With Cloud Certification
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centre in Noida, Pune, Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/kcYTAp

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Data Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/8p9ZXN
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Python, Databases (MySQL), Big Data (Spark, Kafka)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE – AI & DS / AI & ML
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centre in Noida, Pune, Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/8p9ZXN



Job Description:
Position: Python Technical Architect
Major Responsibilities:
● Develop and customize solutions, including workflows, Workviews, and application integrations.
● Integrate with other enterprise applications and systems.
● Perform system upgrades and migrations to ensure optimal performance.
● Troubleshoot and resolve issues related to applications and workflows using Diagnostic console.
● Ensure data integrity and security within the system.
● Maintain documentation for system configurations, workflows, and processes.
● Stay updated on best practices, new features and industry trends.
● Hands-on experience in Waterfall and Agile Scrum methodologies.
● Working on software issues and specifications and performing Design/Code Review(s).
● Engaging in the assignment of work to the development team resources, ensuring effective transition of knowledge, design assumptions and development expectations.
● Ability to mentor developers and lead cross-functional technical teams.
● Collaborate with stakeholders to gather requirements and translate them into technical specifications for effective workflow/Workview design.
● Assist in the training of end-users and provide support as needed
● Contributing to the organizational values by actively working with agile development teams, methodologies, and toolsets.
● Driving concise, structured, and effective communication with peers and clients.
Key Capabilities and Competencies
● Proven experience as a Software Architect or Technical Project Manager with architectural responsibilities.
● Strong proficiency in Python and relevant frameworks (Django, Flask, FastAPI).
● Strong understanding of software development lifecycle (SDLC), agile methodologies (Scrum, Kanban) and DevOps practices.
● Expertise in Azure cloud ecosystem and architecture design patterns.
● Familiarity with Azure DevOps, CI/CD pipelines, monitoring and logging.
● Experience with RESTful APIs, microservices architecture and asynchronous processing.
● Deep understanding of insurance domain processes such as claims management, policy administration etc.
● Experience in database design and data modelling with SQL (MySQL) and NoSQL (Azure Cosmos DB).
● Knowledge of security best practices including data encryption, API security and compliance standards.
● Knowledge of SAST and DAST security tools is a plus.
● Strong documentation skill for articulating architecture decisions and technical concepts to stakeholders.
● Experience with system integration using middleware or web services.
● Experience in server load balancing and in the planning, configuration, maintenance, and administration of server systems.
● Experience with developing reusable assets such as prototypes, solution designs, documentation and other materials that contribute to department efficiency.
● Highly cognizant of the DevOps approach, including basic security measures.
● Technical writing skills, strong networking, and communication style with the ability to formulate professional emails, presentations, and documents.
● Passion for technology trends in the insurance industry and emerging technology space.
Qualification and Experience
● Bachelor's degree in Computer Science, Information Technology, or equivalent.
● Overall work experience of 10-12 years.
● Recognizable domain knowledge and awareness of basic insurance and regulatory frameworks.
● Previous experience working in the insurance industry (AINS Certification is a plus).

Job Title : Backend Developer (Node.js or Python/Django)
Experience : 2 to 5 Years
Location : Connaught Place, Delhi (Work From Office)
Job Summary :
We are looking for a skilled and motivated Backend Developer (Node.js or Python/Django) to join our in-house engineering team.
Key Responsibilities :
- Design, develop, test, and maintain robust backend systems using Node.js or Python/Django.
- Build and integrate RESTful APIs including third-party Authentication APIs (OAuth, JWT, etc.); a minimal JWT sketch follows this list.
- Work with data stores like Redis and Elasticsearch to support caching and search features.
- Collaborate with frontend developers, product managers, and QA teams to deliver complete solutions.
- Ensure code quality, maintainability, and performance optimization.
- Write clean, scalable, and well-documented code.
- Participate in code reviews and contribute to team best practices.
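As a rough illustration of the JWT flow mentioned above, here is a sketch assuming the PyJWT package; the secret, claims, and expiry are placeholder choices, not a prescribed design:

# jwt_sketch.py - issuing and verifying a token (assumes PyJWT: pip install PyJWT)
import datetime
import jwt

SECRET = "change-me"  # placeholder; load from an env var or secret manager in practice

def issue_token(user_id: str) -> str:
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # raises if expired or tampered
    return claims["sub"]

if __name__ == "__main__":
    print(verify_token(issue_token("user-42")))  # -> user-42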
Required Skills :
- 2 to 5 Years of hands-on experience in backend development.
- Proficiency in Node.js and/or Python (Django framework).
- Solid understanding and experience with Authentication APIs.
- Experience with Redis and Elasticsearch for caching and full-text search.
- Strong knowledge of REST API design and best practices.
- Experience working with relational and/or NoSQL databases.
- Must have completed at least 2 end-to-end backend projects.
Nice to Have :
- Experience with Docker or containerized environments.
- Familiarity with CI/CD pipelines and DevOps workflows.
- Exposure to cloud platforms like AWS, GCP, or Azure.

🚀 We’re Hiring! | AI/ML Engineer – Computer Vision
📍 Location: Noida | 🕘 Full-Time
🔍 What We’re Looking For:
• 4+ years in AI/ML (Computer Vision)
• Python, OpenCV, TensorFlow, PyTorch, etc.
• Hands-on with object detection, face recognition, classification (a small OpenCV sketch follows below)
• Git, Docker, Linux experience
• Curious, driven, and ready to build impactful products
💡 Be part of a fast-growing team, build products used by brands like Biba, Zivame, Costa Coffee & more!
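For a flavor of the computer-vision work, here is a classical face-detection sketch using OpenCV's bundled Haar cascade (the input image name is hypothetical, and production systems would typically use deep models instead):

# face_detect_sketch.py - classical face detection with OpenCV Haar cascades
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
img = cv2.imread("people.jpg")  # hypothetical input; imread returns None if the file is missing
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw a box per face
cv2.imwrite("people_annotated.jpg", img)
print(f"detected {len(faces)} face(s)")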

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics (see the drift-check sketch after this list).
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
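To make the drift-monitoring bullet concrete, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test (assumes numpy and scipy; the alpha threshold and data are illustrative, not a production monitoring stack):

# drift_check_sketch.py - toy per-feature data-drift check
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference, live, alpha=0.05):
    # Compare one feature's training-time vs. production distribution
    stat, p_value = ks_2samp(reference, live)
    return p_value < alpha  # a small p-value suggests the distributions differ

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_feature = rng.normal(0.0, 1.0, 5_000)  # reference distribution
    prod_feature = rng.normal(0.4, 1.0, 5_000)   # shifted production data
    print("drift detected:", drifted(train_feature, prod_feature))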
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.

Role - MLOps Engineer
Required Experience - 4 Years
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Key Requirements:
- 4+ years of experience in Software Engineering with MLOps focus
- Strong expertise in AWS, particularly AWS SageMaker (required)
- AWS DataZone experience (preferred)
- Proficiency in Python, R, Scala, or Spark
- Experience developing scalable, reliable, and secure applications
- Track record of production-grade development, integration and support

We are looking for a skilled and passionate Data Engineer with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python frameworks such as PySpark for data processing (see the PySpark sketch after this list).
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Basic database design and interaction with MySQL or DynamoDB.
- Debugging and troubleshooting application issues and performance bottlenecks.
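A minimal PySpark sketch of the kind of batch transformation described above (the file name and columns are hypothetical; assumes pyspark is installed):

# pyspark_sketch.py - small batch aggregation with PySpark
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

# Hypothetical input with columns: order_date, status, amount
orders = spark.read.csv("orders.csv", header=True, inferSchema=True)

daily_revenue = (
    orders
    .filter(F.col("status") == "completed")   # keep finished orders only
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))    # total revenue per day
    .orderBy("order_date")
)

daily_revenue.show()
spark.stop()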
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience in working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.


Requirement:
● Role: Fullstack Developer
● Location: Noida (Hybrid)
● Experience: 1-3 years
● Type: Full-Time
Role Description : We're seeking a Fullstack Developer to join our fast-moving team at Velto. You'll be responsible for building robust backend services and user-facing features using a modern tech stack. In this role, you'll also get hands-on exposure to applied AI, contributing to the development of LLM-powered workflows, agentic systems, and custom fine-tuning pipelines.
Responsibilities:
● Develop and maintain backend services using Python and FastAPI (a minimal FastAPI sketch follows this list)
● Build interactive frontend components using React
● Work with SQL databases, design schema, and integrate data models with Python
● Integrate and build features on top of LLMs and agent frameworks (e.g., LangChain, OpenAI, HuggingFace)
● Contribute to AI fine-tuning pipelines, retrieval-augmented generation (RAG) setups, and contract intelligence workflows
● Write unit tests with libraries like Jest, React Testing Library, and pytest
● Collaborate in agile sprints to deliver high-quality, testable, and scalable code
● Ensure end-to-end performance, security, and reliability of the stack
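As a minimal sketch of the FastAPI side (the endpoint names and the Contract model are invented for illustration; run with uvicorn):

# fastapi_sketch.py - run: uvicorn fastapi_sketch:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Contract(BaseModel):
    name: str
    pages: int

@app.post("/contracts")
def create_contract(contract: Contract) -> dict:
    # A real service would persist to SQL and kick off an LLM/RAG pipeline here
    return {"received": contract.name, "pages": contract.pages}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}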
Required Skills:
● Proficient in Python and experienced with web frameworks like FastAPI
● Strong grasp of JavaScript and React for frontend development
● Solid understanding of SQL and relational database integration with Python
● Exposure to LLMs, vector databases, and AI-based applications (projects, internships, or coursework count)
● Familiar with Git, REST APIs, and modern software development practices
● Bachelor's degree in Computer Science or equivalent field
Nice to Have:
● Experience working with LangChain, RAG pipelines, or building agentic workflows
● Familiarity with containerization (Docker), basic DevOps, or cloud deployment
● Prior project or internship involving AI/ML, NLP, or SaaS products
Why Join Us?
● Work on real-world applications of AI in enterprise SaaS
● Fast-paced, early-stage startup culture with direct ownership
● Learn by doing—no layers, no red tape
● Hybrid work setup and merit-based growth


Job Title: Full Stack Engineer
Location: Delhi-NCR
Type: Full-Time
Responsibilities
Frontend:
- Develop responsive, intuitive interfaces using HTML, CSS (SASS), React, and Vanilla JS.
- Implement real-time features using sockets for dynamic, interactive user experiences.
- Collaborate with designers to ensure consistent UI/UX patterns and deliver visually compelling products.
Backend:
- Design, implement, and maintain APIs using Python (FastAPI).
- Integrate AI-driven features to enhance user experience and streamline processes.
- Ensure the code adheres to best practices in performance, scalability, and security.
- Troubleshoot and resolve production issues, minimizing downtime and improving reliability.
Database & Data Management:
- Work with PostgreSQL for relational data, ensuring optimal queries and indexing.
- Utilize ClickHouse or MongoDB where appropriate to handle specific data workloads and analytics needs.
- Contribute to building dashboards and tools for analytics and reporting.
- Leverage AI/ML concepts to derive insights from data and improve system performance.
General:
- Use Git for version control; conduct code reviews, ensure clean commit history, and maintain robust documentation.
- Collaborate with cross-functional teams to deliver features that align with business goals.
- Stay updated with industry trends, particularly in AI and emerging frameworks, and apply them to enhance our platform.
- Mentor junior engineers and contribute to continuous improvement in team processes and code quality.

We’re looking for a skilled Senior Machine Learning Engineer to help us transform the Insurtech space. You’ll build intelligent agents and models that read, reason, and act.
Insurance ops are broken. Underwriters drown in PDFs. Risk clearance is chaos. Emails go in circles. We’ve lived it – and we’re fixing it. Bound AI is building agentic AI workflows that go beyond chat. We orchestrate intelligent agents to handle policy operations end-to-end:
• Risk clearance.
• SOV ingestion.
• Loss run summarization.
• Policy issuance.
• Risk triage.
No hallucinations. No handwaving. Just real-world AI that executes – in production, at scale.
Join us to help shape the future of insurance through advanced technology!
We’re Looking For:
- Deep experience in GenAI, LLM fine-tuning, and multi-agent orchestration (LangChain, DSPy, or similar).
- 5+ years of proven experience in the field
- Strong ML/AI engineering background in both foundational modeling (NLP, transformers, RAG) and traditional ML.
- Solid Python engineering chops – you write production-ready code, not just notebooks.
- A startup mindset – curiosity, speed, and obsession with shipping things that matter.
- Bonus – Experience with insurance or document intelligence (SOVs, Loss Runs, ACORDs).
What You’ll Be Doing:
- Develop foundation-model-based pipelines to read and understand insurance documents (a toy retrieval sketch follows this list).
- Develop GenAI agents that handle real-time decision-making and workflow orchestration, and modular, composable agent architectures that interact with humans, APIs, and other agents.
- Work on auto-adaptive workflows that optimize around data quality, context, and risk signals.
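For orientation, here is the dense-retrieval step at the heart of a RAG pipeline, sketched with numpy only; the vectors are random stand-ins for real embeddings from an embedding model, so this is a shape illustration rather than our production system:

# rag_retrieval_sketch.py - toy dense-retrieval step of a RAG pipeline
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    # Cosine similarity between the query and each document embedding
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]  # indices of the k most similar docs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    docs = rng.normal(size=(5, 64))              # 5 fake document embeddings
    query = docs[3] + 0.1 * rng.normal(size=64)  # query vector near doc 3
    print(top_k(query, docs))                    # doc 3 should rank first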

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a Lead SDET with 8-10 years of experience to play a pivotal role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is an exciting opportunity to work on cutting-edge performance testing strategies and drive impactful initiatives across the organisation.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Review application architecture and suggest improvements to enhance scalability
- Leverage AI at appropriate layers to improve efficiency and drive positive business outcomes
- Drive performance testing initiatives across the organization and ensure seamless execution
- Automate the capturing of performance metrics and generate performance trend reports
- Research, evaluate, and conduct PoCs for new tools and solutions
- Collaborate with developers and architects to enhance frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Ensure high availability and reliability of applications and services
Requirements:
- 6-9 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc. (see the Locust sketch after this list)
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimising frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
- Experience in increasing application/service availability from 99.9% (three 9s) to 99.99% or higher (four/five 9s)
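For reference, a minimal Locust load-test sketch (the target host and endpoints are placeholders; run with: locust -f locustfile_sketch.py):

# locustfile_sketch.py - smallest useful Locust scenario
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)      # simulated think time between requests
    host = "https://example.com"   # placeholder target

    @task
    def get_health(self):
        self.client.get("/health")

    @task(3)                       # weighted: runs ~3x as often as get_health
    def list_items(self):
        self.client.get("/api/items")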
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website: https://www.gohighlevel.com/
YouTube Channel: https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post: https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a SDET III with 5-6 years of experience to play a crucial role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is a great opportunity to work on cutting-edge performance testing strategies and contribute to the success of our products.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Develop test strategies based on customer behavior to ensure high-performing applications
- Automate the capturing of performance metrics and generate performance trend reports
- Collaborate with developers and architects to optimize frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Research, evaluate, and conduct PoCs for new tools and solutions
- Ensure high availability and reliability of applications and services
Requirements:
- 4-7 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc.
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimizing frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

Job Title: L3 SDE (Python- Django)
Location: Arjan Garh, MG Road, Gurgaon
Job Type: Full-time, On site
Company: Timble Technologies Pvt. Ltd. (www.timbleglance.com)
Pay Range: 30K- 70K
**IMMEDIATE JOINERS REQUIRED**
About Us:
Our aim is to deliver 'More Data, More Opportunities'. We take pride in building cutting-edge AI solutions that help financial institutions mitigate risk and generate comprehensive data. Elevate your business's credibility with Timble Glance's verification and authentication solutions.
Responsibilities
• Writing and testing code, debugging programs, and integrating applications with third-party web services. To be successful in this role, you should have experience using server-side logic and work well in a team. Ultimately, you'll build highly responsive web applications that align with our clients' business needs.
• Write effective, scalable code
• Develop back-end components to improve responsiveness and overall performance
• Integrate user-facing elements into applications
• Improve functionality of existing systems
• Implement security and data protection solutions
• Assess and prioritize feature requests
• Coordinate with internal teams to understand user requirements and provide technical solutions
• Creates customized applications for smaller tasks to enhance website capability based on business needs
• Builds table frames and forms and writes script within the browser to enhance site functionality
• Ensures web pages are functional across different browser types; conducts tests to verify user functionality
• Verifies compliance with accessibility standards
• Assists in resolving moderately complex production support problems
Profile Requirements
* 2 years or more experience as a Python Developer
* Expertise in at least one popular Python framework (Django required)
* Knowledge of object-relational mapping (ORM)
* Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
* Familiarity with event-driven programming in Python
* Good understanding of the operating system and networking concepts.
* Good analytical and troubleshooting skills
* Graduation/Post Graduation in Computer Science / IT / Software Engineering
* Decent verbal and written communication skills to communicate with customers, support personnel, and management
**IMMEDIATE JOINERS REQUIRED**


JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the cross of hardware, software, content and services with focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with some of our notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, AR/VR headsets for consumers and enterprise space.
Mon-Fri role, in office, with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the cross of hardware, software, content and services with focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with some of our notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, AR/VR headsets for consumers and enterprise space.
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js or Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka (a Kafka sketch follows this list).
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
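A bare-bones Kafka produce/consume sketch in Python (assumes the kafka-python package and a broker on localhost:9092; the topic and payload are invented for illustration):

# kafka_sketch.py - minimal producer/consumer pair (pip install kafka-python)
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"type": "page_view", "user": "u-1"})
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # handle each event as it arrives
    break                 # demo only: stop after the first message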
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js or Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java : Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.


- 1+ years of experience as a Full Stack Developer using Node.js and React.js.
- A strong sense of ownership—you care about business impact, not just code.
- Experience working in a fast-paced, high-growth environment.
- Exceptional communication skills in both formal and informal settings.
- A team player with a strong work ethic, who’s in it for the long run.



Job Title : Sr. Data Scientist
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2 PM to 11 PM
Availability : Immediate
Job Description :
We are seeking a Senior Data Scientist to develop and implement machine learning models, predictive analytics, and data-driven solutions.
The role involves data analysis, dashboard development (Looker Studio), NLP, Generative AI (LLMs, Prompt Engineering), and statistical modeling.
Strong expertise in Python (Pandas, NumPy), Cloud Data Science (AWS SageMaker, Azure OpenAI), Agile (Jira, Confluence), and stakeholder collaboration is essential.
Mandatory skills : Machine Learning, Cloud Data Science (AWS SageMaker, Azure OpenAI), Python (Pandas, NumPy), Data Visualization (Looker Studio), NLP & Generative AI (LLMs, Prompt Engineering), Statistical Modeling, Agile (Jira, Confluence), and strong stakeholder communication.

Job Title : Sr. Data Engineer
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2-11 PM
Availability : Immediate
Job Description :
- We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
- The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
- You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift.
- Strong debugging, collaboration, and communication skills are essential.


Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.
Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we’re shaping the future and making a meaningful impact on the world.
About the Role
We at Innovaccer are looking for a Software Development Engineer-II (Fullstack) to build the most amazing product experience. You'll get to work with other engineers to build delightful feature experiences to understand and solve our customers' pain points.
A Day in the Life
● Building efficient and reusable applications and abstractions
● Identify and communicate best practices.
● Participate in the project life-cycle from pitch/prototyping through definition and design to build, integration, and delivery
● Analyse and improve the performance, scalability, stability, and security of the product
● Improve engineering standards, tooling, and processes
What You Need
● 2-5 years of experience with a start-up mentality and a high willingness to learn
● Expertise in Python/NodeJS
● Experience working in Web Development Frameworks (Express/Django or Flask)
● Experience working in teams of 3-10 people.
● Knowledge of Relational Databases
Nice to have
● Experience working in FE (JS + React)
● Experience in Cloud (AWS)
● Experience in Terraform
We offer competitive benefits to set you up for success in and outside of work.
Here’s What We Offer
● Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
● Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
● Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
● Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
● Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
● Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices
Where and how we work
Our Noida office is situated in a posh space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.
Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innova. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
About Innovaccer
Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com.
Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com
Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions (a small probe sketch follows this list).
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
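To illustrate the automation and monitoring bullet, here is a stdlib-only availability probe; the URL and cadence are placeholders, and real monitoring would use Prometheus, CloudWatch, or a similar stack:

# healthcheck_sketch.py - toy HTTP availability probe (stdlib only)
import time
import urllib.request

def probe(url, timeout=3.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False  # DNS failures, timeouts, and HTTP errors all count as down

if __name__ == "__main__":
    url = "https://example.com/health"  # placeholder endpoint
    for _ in range(3):
        print(url, "up" if probe(url) else "DOWN")
        time.sleep(5)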
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.