50+ Python Jobs in Delhi, NCR and Gurgaon | Python Job openings in Delhi, NCR and Gurgaon
Apply to 50+ Python Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.


We are seeking a talented and passionate Data Engineer to join our growing data team. In this role, you will be responsible for building, maintaining, and optimizing our data pipelines and infrastructure on Google Cloud Platform (GCP). The ideal candidate will have a strong background in data warehousing, ETL/ELT processes, and a passion for turning raw data into actionable insights. You will work closely with data scientists, analysts, and other engineers to support a variety of data-driven initiatives.
Responsibilities:
• Design, develop, and maintain scalable and reliable data pipelines using Dataform or DBT.
• Build and optimize data warehousing solutions on Google BigQuery.
• Develop and manage data workflows using Apache Airflow.
• Write complex and efficient SQL queries for data extraction, transformation, and analysis.
• Develop Python-based scripts and applications for data processing and automation.
• Collaborate with data scientists and analysts to understand their data requirements and provide solutions.
• Implement data quality checks and monitoring to ensure data accuracy and consistency.
• Optimize data pipelines for performance, scalability, and cost-effectiveness.
• Contribute to the design and implementation of data infrastructure best practices.
• Troubleshoot and resolve data-related issues.
• Stay up-to-date with the latest data engineering trends and technologies, particularly within the Google Cloud ecosystem.
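The data-quality bullet above is concrete enough to sketch. A minimal, illustrative check in plain Python; the column names (`customer_id`, `amount`) and the 5% threshold are hypothetical, not part of the posting:

```python
from typing import Iterable

def null_rate(rows: Iterable[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    rows = list(rows)
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_batch(rows: list[dict], max_null_rate: float = 0.05) -> list[str]:
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    if null_rate(rows, "customer_id") > max_null_rate:
        failures.append("customer_id null rate too high")
    if any(r.get("amount", 0) < 0 for r in rows):
        failures.append("negative amounts found")
    return failures
```

In practice such checks would run as an Airflow task or a Dataform/DBT assertion before data lands in BigQuery; this sketch only shows the shape of the logic.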
Qualifications:
• Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
• 3-4 years of experience in a Data Engineer role.
• Strong expertise in SQL (preferably with BigQuery SQL).
• Proficiency in Python programming for data manipulation and automation.
• Hands-on experience with Google Cloud Platform (GCP) and its data services.
• Solid understanding of data warehousing concepts and ETL/ELT methodologies.
• Experience with Dataform or DBT for data transformation and modeling.
• Experience with workflow management tools such as Apache Airflow.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Ability to work independently and as part of a team.
Preferred Qualifications:
• Google Cloud Professional Data Engineer certification.
• Knowledge of data modeling techniques (e.g., dimensional modeling, star schema).
• Familiarity with Agile development methodologies.

Location: Delhi NCR (Hybrid)
Experience: Minimum 5 years in software development, with prior exposure to leading projects or mentoring team members
Employment Type: Full-time
Key Responsibilities:
- Lead development efforts across backend, frontend, and infrastructure in collaboration with the product team.
- Be hands-on in MERN stack and Python while mentoring junior developers.
- Design and maintain microservices and event-driven systems on AWS.
- Manage deployments and scaling on AWS ECS, Lambda, SQS, S3, SES, CloudFront, ELB.
- Build and optimize data pipelines & reporting using BigQuery.
- Set up and manage Dockerized applications with proper CI/CD pipelines.
- Implement and own monitoring & alerting systems (Prometheus, Loki, Grafana, CloudWatch).
- Ensure best practices for code quality, security, and system performance.
- Collaborate closely with product managers, designers, and testers to deliver features on time.
Required Skills & Experience:
- 5-8 years of experience in full-stack/backend engineering.
- Strong expertise in MERN stack (MongoDB, Express.js, React.js, Node.js).
- Working knowledge of Python (APIs, scripting, or data processing).
- Experience with AWS services – ECS, Lambda, SQS, S3, SES.
- Hands-on with Docker & container orchestration.
- Exposure to data warehousing/analytics with BigQuery (or similar).
- Experience in CI/CD automation.
- Familiarity with logging & monitoring tools (Prometheus, Loki, Grafana, or AWS CloudWatch).
- Ability to mentor junior developers and take ownership of projects.
Good to Have:
- Experience in SaaS product development.
- Knowledge of multi-tenant architectures.
- Familiarity with AI/RAG-based chatbots.

Development and Customization:
Build and customize Frappe modules to meet business requirements.
Develop new functionalities and troubleshoot issues in ERPNext applications.
Integrate third-party APIs for seamless interoperability.
Technical Support:
Provide technical support to end-users and resolve system issues.
Maintain technical documentation for implementations.
Collaboration:
Work with teams to gather requirements and recommend solutions.
Participate in code reviews for quality standards.
Continuous Improvement:
Stay updated with Frappe developments and optimize application performance.
Skills Required:
Proficiency in Python, JavaScript, and relational databases.
Knowledge of Frappe/ERPNext framework and object-oriented programming.
Experience with Git for version control.
Strong analytical skills.

We are seeking a highly skilled and self-motivated Backend Java Developer with at least 2 years of hands-on experience in backend development. The ideal candidate should have a deep understanding of core Java, backend architecture, REST APIs, and should be passionate about building scalable, secure, and high-performance backend systems.
Key Responsibilities:
Design, develop, and maintain scalable backend applications using Java (preferably Java 8 or above).
Develop RESTful APIs and integrate third-party APIs and services.
Collaborate with front-end developers, DevOps, and QA teams to deliver high-quality software solutions.
Write clean, maintainable, and well-documented code.
Optimize application performance and troubleshoot issues across the development lifecycle.
Ensure best practices in code design, testing, and security.
Participate in code reviews, knowledge sharing, and team discussions.

Job Title : Python Developer - Web3 (Mandatory) & Trading Bot Creation (Optional)
Experience : 2+ Years
Location : Noida (On-site)
Working Days : 6 days (Monday to Friday in office, Saturday from home)
Job Type : Full-time
Mandatory Skills : Python, Web3 (web3.py/ethers), smart contract interaction, real-time APIs/WebSockets, Git/Docker, security handling.
Responsibilities :
- Build and optimize Web3-based applications & integrations using Python.
- Interact with smart contracts and manage on-chain/off-chain data flows.
- Ensure secure key management, scalability, and performance.
- (Optional) Develop and enhance automated trading bots & strategies.
Required Skills :
- Strong experience in Python development.
- Proficiency in Web3 (web3.py/ethers) and smart contract interaction.
- Hands-on with real-time APIs, WebSockets, Git/Docker.
- Knowledge of security handling & key management.
- (Optional) Trading bot development, CEX/DEX APIs, backtesting (pandas/numpy).
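The "security handling & key management" requirement above usually starts with one rule: never hardcode a private key. A minimal sketch, where the environment-variable name is an assumption and a secrets manager or HSM would typically back this in production:

```python
import os

def load_private_key(env_var: str = "SIGNER_PRIVATE_KEY") -> str:
    """Read a 0x-prefixed 32-byte hex signing key from the environment."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    if not key.startswith("0x") or len(key) != 66:
        raise ValueError("expected a 0x-prefixed 32-byte hex key")
    return key
```

The returned string would then be handed to a signing library (e.g. web3.py's account APIs) rather than stored or logged anywhere.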



About Us:
We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using deep technology expertise and strategic partnerships with top-tier technology partners. Be a part of an awesome tribe!
Experience Range: 6-10 Years
Role: Fullstack Technical Lead
Key Responsibilities:
- Develop and maintain scalable web applications using React for the frontend and Python (FastAPI/Flask/Django) for the backend.
- Work with databases such as Postgres (SQL) and MongoDB to design and manage robust data structures.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and fix bottlenecks and bugs.
- Others: AWS, Snowflake, Azure, JIRA, CI/CD pipelines
Key Requirements:
- React: Extensive experience in building complex frontend applications.
- Must Have: Experience with Python (FastAPI/Flask/Django).
- Cloud experience required: AWS or Azure.
- Experience with databases like Postgres (SQL) and MongoDB.
- Basic understanding of Data Fabric – Good to have
- Ability to work independently and as part of a team.
- Excellent problem-solving skills and attention to detail.
What We Offer
- Professional Development and Mentorship.
- Hybrid work mode with remote friendly workplace. (6 times in a row Great Place To Work Certified).
- Health and Family Insurance.
- 40+ Leaves per year along with maternity & paternity leaves.
- Wellness, meditation and Counselling sessions.

Job Title: L3 SDE (Python- Django)
Location: Arjan Garh, MG Road (Delhi)
Job Type: Full-time, On site
Pay Range: Rs. 30,000-70,000
**IMMEDIATE JOINERS REQUIRED**
About Us:
Our aim is to develop ‘More Data, More Opportunities’. We take pride in building cutting-edge AI solutions that help financial institutions mitigate risk and generate comprehensive data. Elevate your business's credibility with Timble Glance's verification and authentication solutions.
Responsibilities
• Writing and testing code, debugging programs, and integrating applications with third-party web services. To be successful in this role, you should have experience using server-side logic and work well in a team. Ultimately, you’ll build highly responsive web applications that align with our client’s business needs
• Write effective, scalable code
• Develop back-end components to improve responsiveness and overall performance
• Integrate user-facing elements into applications
• Improve functionality of existing systems
• Implement security and data protection solutions
• Assess and prioritize feature requests
• Coordinate with internal teams to understand user requirements and provide technical solutions
• Create customized applications for smaller tasks to enhance website capability based on business needs
• Build table frames and forms, and write scripts within the browser to enhance site functionality
• Ensure web pages are functional across different browser types; conduct tests to verify user functionality
• Verify compliance with accessibility standards
• Assist in resolving moderately complex production support problems
Profile Requirements
* IMMEDIATE JOINERS REQUIRED
* 2 years or more experience as a Python Developer
* Expertise in at least one Python framework; Django required
* Knowledge of object-relational mapping (ORM)
* Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
* Familiarity with event-driven programming in Python
* Good understanding of the operating system and networking concepts.
* Good analytical and troubleshooting skills
* Graduation/Post Graduation in Computer Science / IT / Software Engineering
* Decent verbal and written communication skills to communicate with customers, support personnel, and management
How to apply: Drop your CV at linkedin.com/in/preeti-bisht-1633b1263/ with your current CTC, notice period, and expected CTC.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment
EDI Developer / Map Conversion Specialist
Role Summary:
Responsible for converting 441 existing EDI maps into the PortPro-compatible format and testing them for 147 customer configurations.
Key Responsibilities:
- Analyze existing EDI maps in Profit Tools.
- Convert, reconfigure, or rebuild maps for PortPro.
- Ensure accuracy in mapping and transformation logic.
- Unit test and debug EDI transactions.
- Support system integration and UAT phases.
Skills Required:
- Proficiency in EDI standards (X12, EDIFACT) and transaction sets.
- Hands-on experience in EDI mapping tools.
- Familiarity with both Profit Tools and PortPro data structures.
- SQL and XML/JSON data handling skills.
- Experience with scripting for automation (Python, Shell scripting preferred).
- Strong troubleshooting and debugging skills.

AccioJob is conducting a Walk-In Hiring Drive with MakunAI Global for the position of Python Engineer.
To apply, register and select your slot here: https://go.acciojob.com/cE8XQy
Required Skills: DSA, Python, Django, FastAPI
Eligibility:
- Degree: All
- Branch: All
- Graduation Year: 2022, 2023, 2024, 2025
Work Details:
- Work Location: Noida (Hybrid)
- CTC: 3.2 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Skill Centers located in Noida, Greater Noida, and Delhi
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/cE8XQy

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Role Overview:
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities:
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
Required Skills and Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.

Job Description: Senior Full-Stack Engineer (MERN + Python)
Location: Noida (Onsite)
Experience: 5 to 10 years
We are hiring a Senior Full-Stack Engineer with proven expertise in MERN technologies and Python backend frameworks to deliver scalable, efficient, and maintainable software solutions. You will design and build web applications and microservices, leveraging FastAPI and advanced asynchronous programming techniques to ensure high performance and reliability.
Key Responsibilities:
- Develop and maintain web applications using the MERN stack alongside Python backend microservices.
- Build efficient and scalable APIs with Python frameworks like FastAPI and Flask, utilizing AsyncIO, multithreading, and multiprocessing for optimal performance.
- Lead architecture and technical decisions spanning both MERN frontend and Python microservices backend.
- Collaborate with UX/UI designers to create intuitive and responsive user interfaces.
- Mentor junior developers and conduct code reviews to ensure adherence to best practices.
- Manage and optimize databases such as MongoDB and PostgreSQL for application and microservices needs.
- Deploy, monitor, and maintain applications and microservices on AWS cloud infrastructure (EC2, Lambda, S3, RDS).
- Implement CI/CD pipelines to automate integration and deployment processes.
- Participate in Agile development practices including sprint planning and retrospectives.
- Ensure application scalability, security, and performance across frontend and backend systems.
- Design cloud-native microservices architectures focused on high availability and fault tolerance.
Required Skills and Experience:
- Strong hands-on experience with the MERN stack: MongoDB, Express.js, React.js, Node.js.
- Proven Python backend development expertise with FastAPI and Flask.
- Deep understanding of asynchronous programming using AsyncIO, multithreading, and multiprocessing.
- Experience designing and developing microservices and RESTful/GraphQL APIs.
- Skilled in database design and optimization for MongoDB and PostgreSQL.
- Familiar with AWS services such as EC2, Lambda, S3, and RDS.
- Experience with Git, CI/CD tools, and automated testing/deployment workflows.
- Ability to lead teams, mentor developers, and make key technical decisions.
- Strong problem-solving, debugging, and communication skills.
- Comfortable working in Agile environments and collaborating cross-functionally.
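The AsyncIO/multithreading requirement above boils down to one core pattern: fanning out I/O-bound calls concurrently instead of awaiting them one by one. A minimal sketch; `fetch_user` and `fetch_orders` are hypothetical stand-ins for real service calls:

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.01)   # stands in for a network round-trip
    return {"id": user_id}

async def fetch_orders(user_id: int) -> list[int]:
    await asyncio.sleep(0.01)
    return [user_id * 10, user_id * 10 + 1]

async def profile(user_id: int) -> dict:
    # Both coroutines run concurrently; total latency is roughly the
    # maximum of the two calls, not their sum.
    user, orders = await asyncio.gather(fetch_user(user_id), fetch_orders(user_id))
    return {**user, "orders": orders}

print(asyncio.run(profile(7)))  # {'id': 7, 'orders': [70, 71]}
```

FastAPI endpoints use the same mechanism: an `async def` route handler can `await asyncio.gather(...)` over several downstream microservice calls.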

Sr. Staff Engineer Role
We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you’ll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you’ll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.
As the Sr. Staff Engineer, you’ll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We’re looking for a technical leader who’s not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.
Responsibilities
● Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
● Drive the design and development of scalable backend services using Python/Node.js.
● Experience in Django, FastAPI, and task orchestration systems.
● Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
● Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
● Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
● Set and enforce engineering best practices, code quality standards, and operational excellence.
● Stay up-to-date with industry trends and advocate for continuous improvement in engineering processes.
● Experience in fintech, tax, or compliance industries.
● Familiarity with containerization tools like Docker and orchestration with Kubernetes.
● Background in security, observability, or compliance automation.
Requirements
● 8+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
● Deep expertise in Python/Node.js, including API development, performance optimization, and testing.
● Experience with event-driven architecture and brokers such as Kafka or RabbitMQ.
● Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
● Solid understanding of Terraform for infrastructure as code.
● Proficiency with Jenkins or similar CI/CD tooling.
● Comfortable balancing technical leadership with hands-on coding and problem-solving.
● Strong communication skills and a collaborative mindset.

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway.
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
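The serverless bullets above (Lambda behind API Gateway, plus unit tests) can be sketched in a few lines. The route and payload shape here are illustrative assumptions, not a real API contract:

```python
import json

def handler(event: dict, context=None) -> dict:
    """Echo-style Lambda handler behind API Gateway's proxy integration."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function taking a dict, the unit tests the posting asks for can call it directly with a synthetic event, with no AWS infrastructure involved.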
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
Job Description : Software Testing (Only Female)
VirtuBox, the world's premier B2B cloud-based SaaS solution, empowers businesses to forge unforgettable customer experiences that transcend screens and ignite brand loyalty. In short, VirtuBox is transforming customer journeys, one pixel at a time.
Job Summary :
We are seeking a proactive and detail-oriented Software Tester with 1–2 years of experience in manual and/or automation testing. The ideal candidate will work closely with developers and product teams to ensure high-quality software delivery by identifying bugs, writing test cases, and executing comprehensive test cycles.
Key Responsibilities :
- Analyze software requirements and design test cases to ensure functionality and performance.
- Identify, document, and track defects using bug-tracking tools.
- Collaborate with developers and stakeholders to resolve issues and improve software quality.
- Perform functional, regression, system, and performance testing.
- Execute automated testing using tools like Selenium, JMeter, or Appium.
- Participate in agile development processes, including stand-up meetings and sprint planning.
- Prepare detailed test reports and documentation for stakeholders.
- Conduct security and usability testing to ensure compliance with industry standards.
- Manage test data to create realistic testing scenarios.
- Validate bug fixes and ensure all functionalities work correctly before release.
Skills Required :
Soft Skills:
- Strong analytical and problem-solving skills.
- Good communication and teamwork abilities.
- Attention to detail and ability to work under deadlines.
Technical Skills:
- Knowledge of manual testing and automated testing tools (Selenium, JMeter, Appium, etc.).
- Understanding of SDLC (Software Development Life Cycle) and STLC (Software Testing Life Cycle).
- Familiarity with defect tracking tools (JIRA, Bugzilla, etc.).
- Basic programming knowledge (Python, Java, SQL) is a plus.
Eligibility Criteria :
- Bachelor’s degree in Computer Science, IT, or related field.
- 1–2 years of hands-on experience in software testing.
- Excellent analytical and communication skills.
- ISTQB certification is desirable but not mandatory.
- Basic knowledge of any scripting or programming language is a plus.
- Strong problem-solving and analytical skills.

We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you’ll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you’ll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.
As the Sr. Staff Engineer, you’ll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We’re looking for a technical leader who’s not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.
Responsibilities
- Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
- Drive the design and development of scalable backend services using Python.
- Experience in Django, FastAPI, Task Orchestration Systems.
- Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
- Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
- Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
- Set and enforce engineering best practices, code quality standards, and operational excellence.
- Stay up-to-date with industry trends and advocate for continuous improvement in engineering processes.
- Experience in fintech, tax, or compliance industries.
- Familiarity with containerization tools like Docker and orchestration with Kubernetes.
- Background in security, observability, or compliance automation.
Requirements
- 7+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
- Deep expertise in Python, including API development, performance optimization, and testing.
- Experience in Event-driven architecture, Kafka/RabbitMQ-like systems.
- Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
- Solid understanding of Terraform for infrastructure as code.
- Proficiency with Jenkins or similar CI/CD tooling.
- Comfortable balancing technical leadership with hands-on coding and problem-solving.
- Strong communication skills and a collaborative mindset.

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
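The retrieval step of the RAG pipeline described above reduces to ranking stored chunks by similarity against a query embedding. A toy sketch with 3-dimensional vectors; a real system would delegate this to a vector index (Atlas Vector Search, Pinecone, Weaviate) over high-dimensional embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k stored chunks most similar to the query."""
    ranked = sorted(store, key=lambda cid: cosine(query, store[cid]), reverse=True)
    return ranked[:k]
```

The ids returned by `top_k` would then be resolved to text chunks and stuffed into the LLM prompt as context for the chat-generation step.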
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Job Description:
Title : Python AWS Developer with API
Tech Stack: AWS API Gateway, Lambda, Oracle RDS, SQL & database management, object-oriented programming (OOP) principles, JavaScript, object-relational mappers, Git, Docker, dependency management, CI/CD, AWS Cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python: coding, debugging, and integrating the application with third-party web services.
· Troubleshoot and debug non-prod defects; back-end development and API work, with a primary focus on coding and monitoring applications.
· Design core application logic.
· Support dependent teams during UAT and perform functional application testing, including Postman-based API testing.
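For an API Gateway + Lambda stack like the one described above, the handler contract is small and worth knowing cold. A minimal sketch follows; the greeting logic is purely hypothetical, but the `event`/`statusCode`/`body` shape is the standard API Gateway proxy-integration contract.

```python
import json

def lambda_handler(event, context):
    # AWS Lambda handler for an API Gateway proxy integration.
    # The HTTP request arrives in 'event'; API Gateway expects a dict
    # with statusCode, headers, and a string body in return.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function of a dict, it can be unit-tested locally (e.g., in Postman-style request/response terms) without deploying.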


AccioJob is conducting a Walk-In Hiring Drive with IT services firm for the position of AI Engineer.
To apply, register and select your slot here: https://go.acciojob.com/283eXn
Required Skills: Python, Machine Learning, Deep Learning, Prompt Engineering
Eligibility:
Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
Branch: Electrical/Other electrical related branches, Computer Science/CSE/Other CS related branch, IT
Graduation Year: 2023, 2024, 2025
Work Details:
Work Location: Noida (Onsite)
CTC: 3 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida, Delhi & Greater Noida Centres
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
- HR Interview Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/283eXn


AccioJob is conducting a Walk-In Hiring Drive with IT services firm for the position of Full Stack Developer.
To apply, register and select your slot here: https://go.acciojob.com/qhtfYQ
Required Skills: Python, JavaScript , React JS
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: Electrical/Other electrical related branches, Computer Science/CSE/Other CS related branch, IT
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Noida (Onsite)
- CTC: 3 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida, Delhi & Greater Noida Centres
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/qhtfYQ

Key Responsibilities
- Design, develop, and maintain automated test scripts using Python, pytest, and Selenium for Salesforce and web applications.
- Create and manage test environments using Docker to ensure consistent testing conditions.
- Collaborate with developers, business analysts, and stakeholders to understand requirements and define test scenarios.
- Execute automated and manual tests, analyze results, and report defects using GitLab or other tracking tools.
- Perform regression, functional, and integration testing for Salesforce applications and customizations.
- Ensure test coverage for Salesforce features, including custom objects, workflows, and Apex code.
- Contribute to continuous integration/continuous deployment (CI/CD) pipelines in GitLab for automated testing.
- Document test cases, processes, and results to maintain a comprehensive testing repository.
- Stay updated on Salesforce updates, testing tools, and industry best practices.
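A common structure for the pytest/Selenium scripts mentioned above is the Page Object pattern. The sketch below shows it with a stubbed driver so the pattern runs without a browser; the locators, `FakeDriver`, and `LoginPage` names are all illustrative assumptions, and in real tests the driver would be `selenium.webdriver.Chrome()` with `By.ID` locators.

```python
class FakeElement:
    # Stand-in for a Selenium WebElement.
    def __init__(self):
        self.typed, self.clicked = [], False
    def send_keys(self, text):
        self.typed.append(text)
    def click(self):
        self.clicked = True

class FakeDriver:
    # Stand-in for selenium.webdriver so the pattern runs headlessly here.
    def __init__(self):
        self.elements = {}
    def find_element(self, by, value):
        return self.elements.setdefault((by, value), FakeElement())

class LoginPage:
    # Page Object: locators and actions live together, so tests read as intent.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "login-btn")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, pwd):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(pwd)
        self.driver.find_element(*self.SUBMIT).click()
```

Keeping locators in one class means a Salesforce UI change touches one file, not every test that logs in.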
Required Qualifications
- 1-3 years of experience in automation testing, preferably with exposure to Salesforce applications.
- Proficiency in Python, pytest, Selenium, Docker, and GitLab for test automation and version control.
- Understanding of software testing methodologies, including functional, regression, and integration testing.
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Strong problem-solving skills and attention to detail.
- Excellent verbal and written communication skills.
- Ability to work in a collaborative, fast-paced team environment.
Preferred Qualifications
- Experience with Salesforce platform testing, including Sales Cloud, Service Cloud, or Marketing Cloud.
- Active Salesforce Trailhead profile with demonstrated learning progress (please include Trailhead profile link in application).
- Salesforce certifications (e.g., Salesforce Administrator or Platform Developer) are a plus.
- Familiarity with testing Apex code, Lightning components, or Salesforce integrations.
- Experience with Agile/Scrum methodologies.
- Knowledge of Webkul’s product ecosystem or e-commerce platforms is an advantage.

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field

Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python.
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
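The real-time aggregation work described above (Kafka Streams style) reduces to windowed operations over a keyed event stream. The sketch below runs the same logic against an in-memory list standing in for consumed Kafka records; the tumbling-window-count-per-patient example and field names are assumptions for illustration, not the actual pipeline.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    # Count events per (window, key) over a tumbling window, the way a
    # Kafka Streams aggregation would. 'events' is an iterable of
    # (epoch_seconds, key) pairs standing in for consumed records.
    windows = defaultdict(int)
    for ts, key in events:
        bucket = ts - (ts % window_seconds)  # start of the tumbling window
        windows[(bucket, key)] += 1
    return dict(windows)
```

In production the windowing and state live in the stream processor (Kafka Streams or Databricks Structured Streaming), but keeping the core transform as a pure function like this makes it unit-testable.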
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.

Hybrid work mode
(Azure) EDW: Experience loading Star schema data warehouses using framework architectures, including loading Type 2 dimensions. Experience ingesting data from various structured and semi-structured sources, with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen 2 Storage, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).
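For reference, the Type 2 dimension load mentioned above boils down to: expire the current row when a tracked attribute changes, then insert a new current row. A minimal sketch, assuming rows are plain dicts with `is_current`/`start_date`/`end_date` flags (in Databricks this would typically be a `MERGE INTO` on a Delta table):

```python
def apply_scd2(dim_rows, incoming, key, tracked, load_date):
    # Type 2 slowly changing dimension logic over in-memory rows.
    out = [r for r in dim_rows if not r["is_current"]]   # history passes through
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    for row in incoming:
        cur = current.pop(row[key], None)
        if cur and all(cur[c] == row[c] for c in tracked):
            out.append(cur)                               # unchanged: keep as-is
            continue
        if cur:                                           # changed: expire old row
            out.append({**cur, "is_current": False, "end_date": load_date})
        out.append({**row, "is_current": True,            # insert new current row
                    "start_date": load_date, "end_date": None})
    out.extend(current.values())  # keys absent from this load stay current
    return out
```

The same expire-then-insert shape carries over directly to a warehouse `MERGE` statement keyed on the business key plus `is_current`.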


About the Role
At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution.
Our flagship platform is currently under development. As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You’ll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.
We are a remote-first team and are open to hiring exceptional candidates globally.
Core Tasks
· Build and maintain the trading engine core for execution, backtesting, and event logging.
· Develop isolated strategy execution runners to support multi-user, multi-strategy environments.
· Implement abstraction layers for brokers and market data feeds to offer a unified API experience.
· Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies.
· Implement logic to parse and execute JSON-based strategy DSL from the strategy builder.
· Design compute-optimized components for multi-asset workflows and scalable backtesting.
· Capture real-time state, performance metrics, and slippage for both live and simulated runs.
· Collaborate with infrastructure engineers to support high-availability deployments.
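The engine core described in these tasks is, at its smallest, an event loop that feeds ticks to strategies and records fills and state. A toy sketch under stated assumptions (strategy-as-callable, naive fill at tick price, no slippage model; all names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Fill:
    symbol: str
    qty: int
    price: float

@dataclass
class Engine:
    # Toy event loop: strategies receive ticks, return an order quantity,
    # and the engine records positions, cash, and fills for later analysis.
    cash: float = 100_000.0
    positions: dict = field(default_factory=dict)
    fills: list = field(default_factory=list)

    def on_tick(self, symbol, price, strategy):
        qty = strategy(symbol, price, self.positions)
        if qty:
            self.positions[symbol] = self.positions.get(symbol, 0) + qty
            self.cash -= qty * price
            self.fills.append(Fill(symbol, qty, price))
```

A production engine replaces the direct call with a message-passing boundary (gRPC/ZeroMQ to the Python strategy process) and adds slippage, latency, and risk checks, but the tick-in/order-out loop is the same.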
Top Technical Competencies
· Proficiency in distributed systems, concurrency, and system design.
· Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.
· Deep understanding of data structures and algorithms with a focus on low-latency performance.
· Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).
· Familiarity with Linux-based environments and system-level performance tuning.
Bonus Competencies
· Understanding of financial markets, asset classes, and algorithmic trading strategies.
· 3–5 years of prior Backend experience.
· Hands-on experience with backtesting frameworks or financial market simulators.
· Experience with sandboxed execution environments or paper trading platforms.
· Advanced knowledge of multithreading, memory optimization, or compiler construction.
· Educational background from Tier-I or Tier-II institutions with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.
What We Offer
· Opportunity to shape the backend architecture of a next-gen fintech startup.
· A collaborative, technically driven culture.
· Competitive compensation with performance-based bonuses.
· Flexible working hours and a remote-friendly environment for candidates across the globe.
· Exposure to financial modeling, trading infrastructure, and real-time applications.
· Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
Ideal Candidate
You’re a backend-first thinker who’s obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.


AccioJob is conducting an offline hiring drive with B2B Automation Platform for the position of SDE Trainee Python.
Link for registration: https://go.acciojob.com/6kT7Ea
Position: SDE Trainee Python – DSA, Python, Django/Flask
Eligibility Criteria:
- Degree: B.Tech / BE / MCA
- Branch: CS / IT
- Work Location: Noida
Compensation:
- CTC: ₹4 - ₹5 LPA
- Service Agreement: 2-year commitment
Note:
Candidates must be available for face-to-face interviews in Noida and should be ready to join immediately.
Evaluation Process:
Round 1: Assessment at AccioJob Noida Skill Centre
Further Rounds (for shortlisted candidates):
- Technical Interview 1
- Technical Interview 2
- Tech + Managerial Round (Face-to-Face)
Important:
Please bring your laptop for the assessment.
Link for registration: https://go.acciojob.com/6kT7Ea

A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect with expertise in:
Key Responsibilities:
1. Architectural Design & Governance
• Define, document, and maintain the technical architecture for projects and product modules.
• Ensure architectural decisions meet scalability, performance, and security requirements.
2. Solution Development & Technical Leadership
• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.
• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.
3. Collaboration & Alignment
• Work closely with Product Managers and Project Managers to prioritize and plan feature development.
• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.
4. Mentorship & Code Quality
• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.
• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.
5. Risk Management & Innovation
• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.
• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.
6. Documentation & Standards
• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.
• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.
Skills:
1. Technical Expertise
• 7–10 years of overall experience in software development, with at least a couple of years in senior or lead roles.
• Strong proficiency in at least one mainstream programming language (e.g., Golang, Python, JavaScript).
• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).
• Good understanding of cloud platforms (AWS, Azure, or GCP) and DevOps practices (CI/CD pipelines, containerization with Docker/Kubernetes).
• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Location: Saket, Delhi (Work from Office)
Schedule: Monday – Friday
Experience : 7-10 Years
Compensation: As per industry standards


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.

AccioJob is conducting an offline hiring drive in partnership with Our Partner Company to hire Junior Business/Data Analysts for an internship with a Pre-Placement Offer (PPO) opportunity.
Apply, Register and select your Slot here: https://go.acciojob.com/69d3Wd
Job Description:
- Role: Junior Business/Data Analyst (Internship + PPO)
- Work Location: Hyderabad
- Internship Stipend: 15,000 - 25,000/month
- Internship Duration: 3 months
- CTC on PPO: 5 LPA - 6 LPA
Eligibility Criteria:
- Degree: Open to all academic backgrounds
- Graduation Year: 2023, 2024, 2025
Required Skills:
- Proficiency in SQL, Excel, Power BI, and basic Python
- Strong analytical mindset and interest in solving business problems with data
Hiring Process:
- Offline Assessment at AccioJob Skill Centres (Hyderabad, Pune, Noida)
- 1 Assignment + 2 Technical Interviews (Virtual; In-person for Hyderabad candidates)
Note: Please bring your laptop and earphones for the test.
Register Here: https://go.acciojob.com/69d3Wd

AccioJob is organizing an exclusive offline hiring drive in collaboration with GameBerry Labs for the role of Software Development Engineer 1 (SDE 1).
To Apply, Register and select your Slot here: https://go.acciojob.com/Zq2UnA
Job Description:
- Role: SDE 1
- Work Location: Bangalore
- CTC: 10 LPA - 15 LPA
Eligibility Criteria:
- Education: B.Tech, BE, BCA, MCA, M.Tech
- Branches: Circuit Branches (CSE, ECE, IT, etc.)
- Graduation Year:
- 2024 (Minimum 9 months of experience)
- 2025 (Minimum 3-6 months of experience)
Evaluation Process:
- Offline Assessment at AccioJob Skill Centres (Hyderabad, Bangalore, Pune, Noida)
- Technical Interviews (2 Rounds - Virtual for most; In-person for Bangalore candidates)
Note: Carry your laptop and earphones for the assessment.
Register Here: https://go.acciojob.com/Zq2UnA



🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET |
| 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiners only
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing.

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Lead and mentor a team of data engineers.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Infrastructure Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/kcYTAp
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Linux, Networking, One scripting language among Python, Bash, and PowerShell, OOPs, Cloud Platforms (AWS, Azure)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE Core With Cloud Certification
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centre in Noida, Pune, Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/kcYTAp

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Data Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/8p9ZXN
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Python, Databases (MySQL), Big Data (Spark, Kafka)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE – AI & DS / AI & ML
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centre in Noida, Pune, Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/8p9ZXN



Job Description:
Position: Python Technical Architect
Major Responsibilities:
● Develop and customize solutions, including workflows, Workviews, and application integrations.
● Integrate with other enterprise applications and systems.
● Perform system upgrades and migrations to ensure optimal performance.
● Troubleshoot and resolve issues related to applications and workflows using Diagnostic console.
● Ensure data integrity and security within the system.
● Maintain documentation for system configurations, workflows, and processes.
● Stay updated on best practices, new features and industry trends.
● Hands-on experience with Waterfall and Agile Scrum methodologies.
● Work on software issues and specifications and perform design/code reviews.
● Assign work to development team resources, ensuring effective transition of knowledge, design assumptions, and development expectations.
● Mentor developers and lead cross-functional technical teams.
● Collaborate with stakeholders to gather requirements and translate them into technical specifications for effective workflow/Workview design.
● Assist in the training of end-users and provide support as needed.
● Contribute to organizational values by actively working with agile development teams, methodologies, and toolsets.
● Drive concise, structured, and effective communication with peers and clients.
Key Capabilities, Competencies and Knowledge
● Proven experience as a Software Architect or Technical Project Manager with architectural responsibilities.
● Strong proficiency in Python and relevant frameworks (Django, Flask, FastAPI).
● Strong understanding of software development lifecycle (SDLC), agile methodologies (Scrum, Kanban) and DevOps practices.
● Expertise in Azure cloud ecosystem and architecture design patterns.
● Familiarity with Azure DevOps, CI/CD pipelines, monitoring and logging.
● Experience with RESTful APIs, microservices architecture and asynchronous processing.
● Deep understanding of insurance domain processes such as claims management, policy administration etc.
● Experience in database design and data modelling with SQL (MySQL) and NoSQL (Azure Cosmos DB).
● Knowledge of security best practices including data encryption, API security and compliance standards.
● Knowledge of SAST and DAST security tools is a plus.
● Strong documentation skill for articulating architecture decisions and technical concepts to stakeholders.
● Experience with system integration using middleware or web services.
● Server load balancing, plus planning, configuration, maintenance, and administration of server systems.
● Experience with developing reusable assets such as prototypes, solution designs, documentation and other materials that contribute to department efficiency.
● Highly cognizant of the DevOps approach, including baseline security measures.
● Technical writing skills, strong networking, and communication style with the ability to formulate professional emails, presentations, and documents.
● Passion for technology trends in the insurance industry and emerging technology space.
Qualification and Experience
● Recognized with a Bachelor’s degree in Computer Science, Information Technology, or equivalent.
● Work experience - Overall experience 10-12 years
● Recognizable domain knowledge and awareness of basic insurance and regulatory frameworks.
● Previous experience working in the insurance industry (AINS Certification is a plus).

Job Title : Backend Developer (Node.js or Python/Django)
Experience : 2 to 5 Years
Location : Connaught Place, Delhi (Work From Office)
Job Summary :
We are looking for a skilled and motivated Backend Developer (Node.js or Python/Django) to join our in-house engineering team.
Key Responsibilities :
- Design, develop, test, and maintain robust backend systems using Node.js or Python/Django.
- Build and integrate RESTful APIs including third-party Authentication APIs (OAuth, JWT, etc.).
- Work with data stores like Redis and Elasticsearch to support caching and search features.
- Collaborate with frontend developers, product managers, and QA teams to deliver complete solutions.
- Ensure code quality, maintainability, and performance optimization.
- Write clean, scalable, and well-documented code.
- Participate in code reviews and contribute to team best practices.
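The token-based authentication mentioned above (OAuth/JWT) comes down to signing a header and payload, then verifying the signature on each request. Below is a minimal standard-library sketch of that mechanism; the secret and claims are placeholders, and a real service should use a maintained library (e.g. PyJWT) rather than hand-rolling this:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical signing key; real services load this from secure config

def b64url(data: bytes) -> str:
    """JWT uses base64url without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict) -> str:
    """Build a JWT-style token: header.payload.signature, HMAC-SHA256 signed."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_token({"sub": "user-42"})
print(verify_token(token))        # True
print(verify_token(token + "x"))  # False: a tampered signature fails
```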
Required Skills :
- 2 to 5 Years of hands-on experience in backend development.
- Proficiency in Node.js and/or Python (Django framework).
- Solid understanding and experience with Authentication APIs.
- Experience with Redis and Elasticsearch for caching and full-text search.
- Strong knowledge of REST API design and best practices.
- Experience working with relational and/or NoSQL databases.
- Must have completed at least 2 end-to-end backend projects.
Nice to Have :
- Experience with Docker or containerized environments.
- Familiarity with CI/CD pipelines and DevOps workflows.
- Exposure to cloud platforms like AWS, GCP, or Azure.

🚀 We’re Hiring! | AI/ML Engineer – Computer Vision
📍 Location: Noida | 🕘 Full-Time
🔍 What We’re Looking For:
• 4+ years in AI/ML (Computer Vision)
• Python, OpenCV, TensorFlow, PyTorch, etc.
• Hands-on with object detection, face recognition, classification
• Git, Docker, Linux experience
• Curious, driven, and ready to build impactful products
💡 Be part of a fast-growing team, build products used by brands like Biba, Zivame, Costa Coffee & more!

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS Sagemaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
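One common way to put a number on the "data drift" monitoring listed above is the Population Stability Index (PSI), which compares the live distribution of a feature against its training-time distribution. A stdlib-only sketch follows; the data is invented, and the ~0.2 alert threshold is a widely used rule of thumb rather than a fixed standard:

```python
import math
from collections import Counter

def psi(expected, actual, bins=5):
    """Population Stability Index between a training sample and live data.
    Values above roughly 0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        n = len(values)
        # small epsilon avoids log(0) for empty buckets
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]           # uniform feature at training time
stable = [i / 100 for i in range(0, 100, 2)]    # similar live distribution
shifted = [0.9 + i / 1000 for i in range(100)]  # live values piled into the top bucket

print(psi(train, stable) < 0.1)   # True: no meaningful drift
print(psi(train, shifted) > 0.2)  # True: strong drift signal
```

In practice a check like this would run on a schedule against each monitored feature, with alerts wired into the model registry or retraining pipeline.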
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS Sagemaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.

Role - MLOps Engineer
Required Experience - 4 Years
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Key Requirements:
- 4+ years of experience in Software Engineering with MLOps focus
- Strong expertise in AWS, particularly AWS SageMaker (required)
- AWS Data Zone experience (preferred)
- Proficiency in Python, R, Scala, or Spark
- Experience developing scalable, reliable, and secure applications
- Track record of production-grade development, integration and support

We are looking for a skilled and passionate Data Engineer with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python frameworks such as PySpark for data processing.
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Design basic database schemas and interact with MySQL or DynamoDB.
- Debug and troubleshoot application issues and performance bottlenecks.
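A recurring task when developing and maintaining RESTful APIs is consuming paginated endpoints until the data is exhausted. The sketch below simulates that pattern; `fetch_page` is a hypothetical stand-in for a real HTTP call (e.g. `requests.get`), and the cursor scheme is one of several common conventions:

```python
# Fake dataset standing in for a remote resource.
DATA = [{"id": i, "value": i * 10} for i in range(1, 8)]

def fetch_page(cursor: int, page_size: int = 3):
    """Hypothetical API call: returns one page plus a cursor for the next page."""
    page = DATA[cursor:cursor + page_size]
    next_cursor = cursor + page_size if cursor + page_size < len(DATA) else None
    return {"items": page, "next": next_cursor}

def iter_all_records():
    """Walk every page until the API reports no further cursor."""
    cursor = 0
    while cursor is not None:
        resp = fetch_page(cursor)
        yield from resp["items"]
        cursor = resp["next"]

records = list(iter_all_records())
print(len(records))  # 7: all pages drained
```

The generator shape keeps memory flat, which matters when the same loop feeds a PySpark job or a bulk load instead of a small list.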
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience in working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.


Requirement:
● Role: Fullstack Developer
● Location: Noida (Hybrid)
● Experience: 1-3 years
● Type: Full-Time
Role Description : We’re seeking a Fullstack Developer to join our fast-moving team at Velto. You’ll be responsible for building robust backend services and user-facing features using a modern tech stack. In this role, you’ll also get hands-on exposure to applied AI, contributing to the development of LLM-powered workflows, agentic systems, and custom fine-tuning pipelines.
Responsibilities:
● Develop and maintain backend services using Python and FastAPI
● Build interactive frontend components using React
● Work with SQL databases, design schema, and integrate data models with Python
● Integrate and build features on top of LLMs and agent frameworks (e.g., LangChain, OpenAI, HuggingFace)
● Contribute to AI fine-tuning pipelines, retrieval-augmented generation (RAG) setups, and contract intelligence workflows
● Write unit tests with libraries such as Jest, React Testing Library, and pytest
● Collaborate in agile sprints to deliver high-quality, testable, and scalable code
● Ensure end-to-end performance, security, and reliability of the stack
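The retrieval step of a RAG setup like the one mentioned above can be illustrated without any ML dependencies: embed the query and each stored chunk, rank by cosine similarity, and hand the top chunks to the LLM. Here a bag-of-words count stands in for a real embedding model, and the contract snippets are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG setups use model-based dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented contract chunks standing in for an indexed document store.
documents = [
    "payment terms are net thirty days",
    "either party may terminate with notice",
    "the governing law is the state of delaware",
]

def retrieve(query: str, k: int = 1):
    """Retrieve step of RAG: rank stored chunks by similarity to the query."""
    scored = sorted(documents, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return scored[:k]  # these chunks would then be placed into the LLM prompt

print(retrieve("when may a party terminate the contract"))
```

Swapping `embed` for a real embedding model and `documents` for a vector database (the components named in the skills list) gives the production version of the same loop.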
Required Skills:
● Proficient in Python and experienced with web frameworks like FastAPI
● Strong grasp of JavaScript and React for frontend development
● Solid understanding of SQL and relational database integration with Python
● Exposure to LLMs, vector databases, and AI-based applications (projects, internships, or coursework count)
● Familiar with Git, REST APIs, and modern software development practices
● Bachelor’s degree in Computer Science or equivalent fi eld
Nice to Have:
● Experience working with LangChain, RAG pipelines, or building agentic workfl ows
● Familiarity with containerization (Docker), basic DevOps, or cloud deployment
● Prior project or internship involving AI/ML, NLP, or SaaS products
Why Join Us?
● Work on real-world applications of AI in enterprise SaaS
● Fast-paced, early-stage startup culture with direct ownership
● Learn by doing—no layers, no red tape
● Hybrid work setup and merit-based growth


Job Title: Full Stack Engineer
Location: Delhi-NCR
Type: Full-Time
Responsibilities
Frontend:
- Develop responsive, intuitive interfaces using HTML, CSS (SASS), React, and Vanilla JS.
- Implement real-time features using sockets for dynamic, interactive user experiences.
- Collaborate with designers to ensure consistent UI/UX patterns and deliver visually compelling products.
Backend:
- Design, implement, and maintain APIs using Python (FastAPI).
- Integrate AI-driven features to enhance user experience and streamline processes.
- Ensure the code adheres to best practices in performance, scalability, and security.
- Troubleshoot and resolve production issues, minimizing downtime and improving reliability.
Database & Data Management:
- Work with PostgreSQL for relational data, ensuring optimal queries and indexing.
- Utilize ClickHouse or MongoDB where appropriate to handle specific data workloads and analytics needs.
- Contribute to building dashboards and tools for analytics and reporting.
- Leverage AI/ML concepts to derive insights from data and improve system performance.
General:
- Use Git for version control; conduct code reviews, ensure clean commit history, and maintain robust documentation.
- Collaborate with cross-functional teams to deliver features that align with business goals.
- Stay updated with industry trends, particularly in AI and emerging frameworks, and apply them to enhance our platform.
- Mentor junior engineers and contribute to continuous improvement in team processes and code quality.

We’re looking for a skilled Senior Machine Learning Engineer to help us transform the Insurtech space. You’ll build intelligent agents and models that read, reason, and act.
Insurance ops are broken. Underwriters drown in PDFs. Risk clearance is chaos. Emails go in circles. We’ve lived it – and we’re fixing it. Bound AI is building agentic AI workflows that go beyond chat. We orchestrate intelligent agents to handle policy operations end-to-end:
• Risk clearance.
• SOV ingestion.
• Loss run summarization.
• Policy issuance.
• Risk triage.
No hallucinations. No handwaving. Just real-world AI that executes – in production, at scale.
Join us to help shape the future of insurance through advanced technology!
We’re Looking For:
- Deep experience in GenAI, LLM fine-tuning, and multi-agent orchestration (LangChain, DSPy, or similar).
- 5+ years of proven experience in the field
- Strong ML/AI engineering background in both foundational modeling (NLP, transformers, RAG) and traditional ML.
- Solid Python engineering chops – you write production-ready code, not just notebooks.
- A startup mindset – curiosity, speed, and obsession with shipping things that matter.
- Bonus – Experience with insurance or document intelligence (SOVs, Loss Runs, ACORDs).
What You’ll Be Doing:
- Develop foundation-model-based pipelines to read and understand insurance documents.
- Develop GenAI agents that handle real-time decision-making and workflow orchestration, and modular, composable agent architectures that interact with humans, APIs, and other agents.
- Work on auto-adaptive workflows that optimize around data quality, context, and risk signals.
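As a very loose sketch of the orchestration idea described above, the core of such a workflow is a dispatcher that routes each incoming document to a specialist handler and escalates anything it cannot classify. The document types, fields, and handlers below are invented for illustration; a real agentic system would put an LLM-backed agent behind each handler:

```python
def handle_sov(doc):
    """Hypothetical SOV-ingestion agent."""
    return f"ingested SOV with {doc['locations']} locations"

def handle_loss_run(doc):
    """Hypothetical loss-run summarization agent."""
    return f"summarized loss run covering {doc['years']} years"

HANDLERS = {"sov": handle_sov, "loss_run": handle_loss_run}

def orchestrate(inbox):
    """Route each document to its agent; unknown types go to a human queue."""
    results, human_queue = [], []
    for doc in inbox:
        handler = HANDLERS.get(doc["type"])
        if handler is None:
            human_queue.append(doc)  # risk-triage fallback: escalate to a person
        else:
            results.append(handler(doc))
    return results, human_queue

inbox = [
    {"type": "sov", "locations": 12},
    {"type": "loss_run", "years": 5},
    {"type": "acord", "form": "125"},  # no handler registered: escalated
]
results, escalated = orchestrate(inbox)
print(len(results), len(escalated))  # 2 1
```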

Job Title: L3 SDE (Python- Django)
Location: Arjan Garh, MG Road, Gurgaon
Job Type: Full-time, On site
Company: Timble technologies Pvt Ltd. (www.timbleglance.com)
Pay Range: 30K-70K
**IMMEDIATE JOINERS REQUIRED**
About Us:
Our aim is to deliver ‘More Data, More Opportunities’. We take pride in building cutting-edge AI solutions that help financial institutions mitigate risk and generate comprehensive data. Elevate your business's credibility with Timble Glance's verification and authentication solutions.
Responsibilities
• Write and test code, debug programs, and integrate applications with third-party web services. To be successful in this role, you should have experience with server-side logic and work well in a team. Ultimately, you’ll build highly responsive web applications that align with our clients’ business needs
• Write effective, scalable code
• Develop back-end components to improve responsiveness and overall performance
• Integrate user-facing elements into applications
• Improve functionality of existing systems
• Implement security and data protection solutions
• Assess and prioritize feature requests
• Coordinate with internal teams to understand user requirements and provide technical solutions
• Create customized applications for smaller tasks to enhance website capability based on business needs
• Build table frames and forms and write script within the browser to enhance site functionality
• Ensure web pages are functional across different browser types; conduct tests to verify user functionality
• Verify compliance with accessibility standards
• Assist in resolving moderately complex production support problems
Profile Requirements
* 2 years or more experience as a Python Developer
* Expertise in at least one popular Python framework; Django is required
* Knowledge of object-relational mapping (ORM)
* Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
* Familiarity with event-driven programming in Python
* Good understanding of the operating system and networking concepts.
* Good analytical and troubleshooting skills
* Graduation/Post Graduation in Computer Science / IT / Software Engineering
* Decent verbal and written communication skills to communicate with customers, support personnel, and management
**IMMEDIATE JOINERS REQUIRED**


JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri role, in office, with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
Mon-Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js or Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js or Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java: Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
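The Kafka-based real-time pipelines in the responsibilities above revolve around two ideas: topics are append-only logs, and each consumer tracks its own read offset. The toy broker below mimics that model in memory purely for illustration; it is not a substitute for a real client such as kafka-python or confluent-kafka:

```python
from collections import defaultdict

class MiniBroker:
    """In-memory stand-in for a Kafka-style broker: named topics hold an
    append-only log, and each consumer tracks its own offset."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic -> ordered log of messages
        self.offsets = defaultdict(int)   # (topic, consumer) -> next index to read

    def produce(self, topic: str, message: dict):
        self.topics[topic].append(message)

    def consume(self, topic: str, consumer: str):
        """Return this consumer's unread messages and advance its offset."""
        key = (topic, consumer)
        log = self.topics[topic]
        unread = log[self.offsets[key]:]
        self.offsets[key] = len(log)
        return unread

broker = MiniBroker()
broker.produce("orders", {"id": 1})
broker.produce("orders", {"id": 2})
first = broker.consume("orders", "billing")   # billing sees both messages
broker.produce("orders", {"id": 3})
second = broker.consume("orders", "billing")  # only the new message
print(len(first), len(second))  # 2 1
```

Because offsets are per-consumer, a second service (say, "analytics") could consume the same topic independently without affecting "billing", which is the property that makes the log-based model attractive for fan-out.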
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.



Job Title : Sr. Data Scientist
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2 PM to 11 PM
Availability : Immediate
Job Description :
We are seeking a Senior Data Scientist to develop and implement machine learning models, predictive analytics, and data-driven solutions.
The role involves data analysis, dashboard development (Looker Studio), NLP, Generative AI (LLMs, Prompt Engineering), and statistical modeling.
Strong expertise in Python (Pandas, NumPy), Cloud Data Science (AWS SageMaker, Azure OpenAI), Agile (Jira, Confluence), and stakeholder collaboration is essential.
Mandatory skills : Machine Learning, Cloud Data Science (AWS SageMaker, Azure OpenAI), Python (Pandas, NumPy), Data Visualization (Looker Studio), NLP & Generative AI (LLMs, Prompt Engineering), Statistical Modeling, Agile (Jira, Confluence), and strong stakeholder communication.