
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

KJBN labs

Posted by sakthi ganesh
Bengaluru (Bangalore)
3 - 6 yrs
₹6L - ₹11L / yr
Python
PostgreSQL
MySQL
Django
Amazon Web Services (AWS)
+3 more

Senior Software Engineer - Backend


A Senior Software Backend Engineer is responsible for designing, building, and maintaining the server-side logic and infrastructure of web applications or software systems. They typically work closely with frontend engineers, DevOps teams, and other stakeholders to ensure that back-end services perform optimally and meet business requirements. Below is an outline of a typical Senior Backend Engineer job profile:


Key Responsibilities:

1. System Architecture & Design:

- Design scalable, high-performance backend services and APIs.

- Participate in the planning, design, and development of new features.

- Ensure that systems are designed with fault tolerance, security, and scalability in mind.

2. Development & Implementation:

- Write clean, maintainable, and efficient code.

- Implement server-side logic, databases, and data storage solutions.

- Work with technologies like REST, GraphQL, and other backend communication methods.

- Design and optimize database schemas, queries, and indexes.

3. Performance Optimization:

- Diagnose and fix performance bottlenecks.

- Optimize backend processes and database queries for speed and efficiency.

- Implement caching strategies and load balancing (a minimal illustrative sketch follows this list).

4. Security:

- Ensure the security of the backend systems by implementing secure coding practices.

- Protect against common security threats such as SQL injection, cross-site scripting (XSS), and others.

5. Collaboration & Leadership:

- Collaborate with frontend teams, product managers, and DevOps engineers.

- Mentor junior developers and guide them in best practices.

- Participate in code reviews and ensure that the development team follows consistent coding standards.

6. Testing & Debugging:

- Develop and run unit, integration, and performance tests to ensure code quality.

- Troubleshoot, debug, and upgrade existing systems.

7. Monitoring & Maintenance:

- Monitor system performance and take preventive measures to ensure uptime and reliability.

- Maintain technical documentation for reference and reporting.

- Stay updated on emerging technologies and incorporate them into the backend tech stack.
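For illustration only (this is not part of the role description): a minimal sketch of what items 2 and 3 above often look like in practice, assuming a Flask service with a Redis cache in front of the database. Every name here is hypothetical.

```python
# Hypothetical cache-aside read endpoint: REST handler + Redis caching.
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 60  # how long a cached profile stays fresh


def load_user_from_db(user_id: int) -> dict:
    # Stand-in for a real query (e.g. PostgreSQL via an ORM or psycopg).
    return {"id": user_id, "name": "example"}


@app.route("/users/<int:user_id>")
def get_user(user_id: int):
    cache_key = f"user:{user_id}"
    cached = cache.get(cache_key)      # 1. try the cache first
    if cached is not None:
        return jsonify(json.loads(cached))
    user = load_user_from_db(user_id)  # 2. fall back to the database
    cache.setex(cache_key, CACHE_TTL_SECONDS, json.dumps(user))  # 3. repopulate the cache
    return jsonify(user)
```

The same cache-aside pattern applies regardless of framework; the TTL and key scheme are the main tuning knobs.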


Required Skills:

1. Programming Languages:

- Expertise in one or more backend programming languages such as Python, Java, Go, or Rust.

2. Database Management:

- Strong understanding of both relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis).

- Knowledge of data modeling, query optimization, and database scaling strategies.

3. API Design & Development:

- Proficiency in designing and implementing RESTful and GraphQL APIs.

- Experience with microservices architecture.

- Good understanding of containers.

4. Cloud & DevOps:

- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.

- Understanding of DevOps principles, CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).

5. Version Control:

- Proficiency with Git and branching strategies.

6. Testing & Debugging Tools:

- Familiarity with testing frameworks, debugging tools, and performance profiling.

7. Soft Skills:

- Strong problem-solving skills.

- Excellent communication and teamwork abilities.

- Leadership and mentorship qualities.


Qualifications:

- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or related field.

- 5+ years of experience in backend development or software engineering.

- Proven experience with system design, architecture, and high-scale application development.


Preferred Qualifications:

- Experience with distributed systems, event-driven architectures, and asynchronous processing.

- Familiarity with message queues (e.g., RabbitMQ, Kafka) and caching layers (e.g., Redis, Memcached).

- Knowledge of infrastructure as code (IaC) tools like Terraform or Ansible.


Tools & Technologies:

- Languages: Python, Java, Golang, Rust.

- Databases: PostgreSQL, MySQL, MongoDB, Redis, Cassandra.

- Frameworks: Django, Flask, Spring Boot, Go Micro.

- Cloud Providers: AWS, Azure, Google Cloud.

- Containerization: Docker, Kubernetes.

- CI/CD: Jenkins, GitLab CI, CircleCI.

This job profile will vary depending on the company and industry, but the core principles of designing, developing, and maintaining back-end systems remain the same.

eazeebox

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
2yrs+
Up to ₹15L / yr (varies)
Python
React Native
SQL
NoSQL Databases
Amazon Web Services (AWS)

About Eazeebox

Eazeebox is India’s first specialized B2B platform for home electrical goods. We simplify supply chain logistics and empower electrical retailers through our one-stop digital platform — offering access to 100+ brands across 15+ categories, no MOQs, flexible credit options, and 4-hour delivery. We’re on a mission to bring technological inclusion to India's massive electrical retail industry.


Role Overview

We’re looking for a hands-on Full Stack Engineer who can build scalable backend systems using Python and mobile applications using React Native. You’ll work directly with the founder and a lean engineering team to architect and deliver core modules across our Quick Commerce stack – including retailer apps, driver apps, order management systems, and more.


What You’ll Do

  • Develop and maintain backend services using Python
  • Build and ship high-performance React Native apps for Android and iOS
  • Collaborate on API design, microservices, and systems integration
  • Ensure performance, reliability, and scalability across the stack
  • Contribute to decisions on re-engineering, tech stack, and infra setup
  • Work closely with the founder and product team to own end-to-end delivery
  • Participate in collaborative working sessions and pair programming when needed


What We’re Looking For

  • Strong proficiency in Python for backend development
  • Experience building mobile apps with React Native
  • Solid understanding of microservices architecture, API layers, and shared data models
  • Familiarity with AWS or equivalent cloud platforms
  • Exposure to Docker, Kubernetes, and CI/CD pipelines
  • Ability to thrive in a fast-paced, high-ownership environment


Good-to-Have (Bonus Points)

  • Experience working in Quick Commerce, logistics, or consumer apps
  • Knowledge of PIM (Product Information Management) systems
  • Understanding of key commerce algorithms (search, ranking, filtering, order management)
  • Ability to use AI-assisted coding tools to speed up development


Why Join Us

  • Build from scratch, not maintain legacy
  • Work directly with the founder and influence tech decisions
  • Shape meaningful digital infrastructure for a $35B+ industry
  • Backed by revenue – 3 years of market traction and growing fast
Peliqan

Posted by Bharath Kumar
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹12L / yr
Python
SQL
API


About the Role


We are looking for a Python Developer with expertise in data synchronization (ETL & Reverse ETL), automation workflows, AI functionality, and connectivity to work directly with a customer in Peliqan. In this role, you will be responsible for building seamless integrations, enabling AI-driven functionality, and ensuring data flows smoothly across various systems.
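To make the ETL / Reverse ETL terminology concrete, here is a small, purely illustrative sketch in plain Python. It is not Peliqan's API; the endpoints, table, and field names are made up.

```python
# Hypothetical ETL / Reverse-ETL sketch: pull records from a SaaS API into a
# local warehouse table, then push qualifying rows back to another system.
import sqlite3

import requests

warehouse = sqlite3.connect("warehouse.db")
warehouse.execute(
    "CREATE TABLE IF NOT EXISTS contacts (id INTEGER PRIMARY KEY, email TEXT, score REAL)"
)

# ETL: extract from the source API, transform, load into the warehouse.
resp = requests.get("https://api.example-crm.com/contacts", timeout=30)  # hypothetical endpoint
for record in resp.json():
    warehouse.execute(
        "INSERT OR REPLACE INTO contacts (id, email, score) VALUES (?, ?, ?)",
        (record["id"], record["email"].lower(), float(record.get("score", 0))),
    )
warehouse.commit()

# Reverse ETL: push warehouse rows that meet a rule back to a destination API.
for row in warehouse.execute("SELECT id, email, score FROM contacts WHERE score > 0.8"):
    requests.post(
        "https://api.example-destination.com/audiences/high-intent",  # hypothetical endpoint
        json={"contact_id": row[0], "email": row[1]},
        timeout=30,
    )
```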

Key Responsibilities

  • Build and maintain data sync pipelines (ETL & Reverse ETL) to ensure seamless data transfer between platforms.
  • Develop automation workflows to streamline processes and improve operational efficiency.
  • Implement AI-driven functionality, including AI-powered analytics, automation, and decision-making capabilities.
  • Build and enhance connectivity between different data sources, APIs, and enterprise applications.
  • Work closely with the customer to understand their technical needs and design tailored solutions in Peliqan.
  • Optimize performance of data integrations and troubleshoot issues as they arise.
  • Ensure security and compliance in data handling and integrations.

Requirements

  • Strong experience in Python and related libraries for data processing & automation.
  • Expertise in ETL, Reverse ETL, and workflow automation tools.
  • Experience working with APIs, data connectors, and integrations across various platforms.
  • Familiarity with AI & machine learning concepts and their practical application in automation.
  • Hands-on experience with Peliqan or similar integration/data automation platforms is a plus.
  • Strong problem-solving skills and the ability to work directly with customers to define and implement solutions.
  • Excellent communication and collaboration skills.

Preferred Qualifications

  • Experience in SQL, NoSQL databases, and cloud platforms (AWS, GCP, Azure).
  • Knowledge of data governance, security best practices, and performance optimization.
  • Prior experience in customer-facing engineering roles.

If you’re a Python & Integration Engineer who loves working on cutting-edge AI, automation, and data connectivity projects, we’d love to hear from you.


VRT Management Group
Posted by Archana Chakali
Hyderabad
0 - 3 yrs
₹2L - ₹7L / yr
HTML/CSS
JavaScript
User Interface (UI) Design
User Experience (UX) Design
NodeJS (Node.js)
+2 more

 

Job Title: Full Stack Developer with Design Expertise

Location: Santosh Nagar, Hyderabad, Telangana (On-site)

Employment Type: Full-Time

Company: VRT Management Group

 

About Us:

At VRT Management Group, we are a dynamic entrepreneurial consulting firm helping SMBs across the USA transform their people, processes, and strategies. As we expand our digital capabilities, we are seeking a skilled and driven Full Stack Developer to join our team full-time and take ownership of our web development and automation needs.

 

Key Responsibilities:

  • Website and Landing Page Hosting: Build, host, and maintain dynamic websites and high-converting landing pages that align with VRT’s brand identity and business objectives.
  • UI/UX Design: Design and implement user-friendly interfaces that ensure seamless navigation and deliver an exceptional user experience across all digital platforms.
  • Internal Tools Development: Design and develop intuitive, scalable internal tools to support various departments, improve operational workflows, and enhance cross-team productivity.
  • Automation Processes: Develop and integrate automation workflows to streamline business operations, enhancing productivity and efficiency.
  • Cross-Functional Collaboration: Work closely with marketing, design, and content teams to ensure seamless integration and performance of digital platforms.

 

Qualifications and Skills:

  • Proven experience as a Full Stack Developer, with a strong portfolio of web development projects.
  • Proficiency in front-end technologies (HTML, CSS, JavaScript, etc.) and back-end frameworks (Node.js, Python, PHP, etc.).
  • Hands-on experience with cloud hosting platforms and database management.
  • Familiarity with building and maintaining LMS platforms is a plus.
  • Strong problem-solving skills and the ability to work in a collaborative, fast-paced environment.
  • Bachelor’s degree in Computer Science, Information Technology, or a related field (preferred).

 

What We Offer:

  • A vibrant workplace where your contributions directly impact business success.
  • Opportunities to innovate and implement cutting-edge technologies.
  • The chance to grow with a company that values continuous learning and professional development.
Brudite Private Limited
Jaipur
0 - 2 yrs
₹4.5L - ₹6L / yr
Python
Amazon Web Services (AWS)
Artificial Intelligence (AI)
Machine Learning (ML)
Docker
+2 more

Brudite is an IT Training and Services company shaping the future of technology with Fortune 500 clients. We specialize in empowering young engineers to achieve their dreams through cutting-edge training, innovative products, and comprehensive services.

Proudly registered with iStart Rajasthan and Startup India, we are supported by industry leaders like NVIDIA and AWS.


Roles and Responsibilities - 


  • A can-do attitude to new challenges.
  • Strong understanding of computer science fundamentals, including operating systems, databases, and networking.
  • Knowledge of Python or any other programming language.
  • Basic knowledge of cloud computing (AWS/Azure/GCP) will be a plus.
  • Basic knowledge of any front-end framework will be a plus.
  • We operate in a fast-paced, startup-like environment, so the ability to work in a dynamic, agile setting is essential.
  • Strong written and verbal communication skills are essential for this role. You'll need to communicate with clients, team members, and stakeholders.
  • Ability to learn and adapt to new technology trends, and a curiosity to learn, are essential.


US healthcare company

Agency job
via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
11 - 20 yrs
₹50L - ₹60L / yr
Generative AI
Python
TensorFlow
Google Cloud Platform (GCP)
POC

Job Title: AI Solutioning Architect – Healthcare IT

Role Summary:

The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).

Key Responsibilities:

  • Architect scalable AI solutions from data ingestion to deployment.
  • Align AI initiatives with business objectives and regulatory requirements (HIPAA).
  • Collaborate with cross-functional teams to deliver AI projects.
  • Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
  • Mentor technical teams and ensure best practices in MLOps.
  • Communicate complex concepts to diverse stakeholders.

Qualifications:

  • Bachelor’s/Master’s in Computer Science or related field.
  • 12+ years in software development/architecture with strong AI/ML focus.
  • Experience in healthcare IT and compliance (HIPAA).
  • Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
  • Hands-on with GCP (preferred) or other cloud platforms.
  • Strong leadership, problem-solving, and communication skills.


Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Mumbai, Hyderabad, Bengaluru (Bangalore), Chennai
5 - 10 yrs
₹6L - ₹25L / yr
Python
Django
NumPy
Flask
pandas
+1 more

Python Developer Job Description

A Python Developer is responsible for designing, developing, and deploying software applications using the Python programming language. Here's a brief overview:


Key Responsibilities

- Software Development: Develop high-quality software applications using Python.

- Problem-Solving: Solve complex problems using Python programming language.

- Code Maintenance: Maintain and update existing codebases to ensure they remain efficient and scalable.

- Collaboration: Collaborate with cross-functional teams to identify and prioritize project requirements.

- Testing and Debugging: Write unit tests and debug applications to ensure high-quality code.


Technical Skills

- Python: Strong understanding of Python programming language and its ecosystem.

- Programming Fundamentals: Knowledge of programming fundamentals, including data structures, algorithms, and object-oriented programming.

- Frameworks and Libraries: Familiarity with popular Python frameworks and libraries, such as Django, Flask, or Pandas.

- Database Management: Understanding of database management systems, including relational databases and NoSQL databases.

- Version Control: Knowledge of version control systems, including Git.


Ceryneian Partners LLC
Posted by Mridu Srivastava
Remote, Noida
0 - 4 yrs
₹12L - ₹28L / yr
Svelte
C++
Erlang
Rust
Python
+2 more

About the Role

At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution.

Our flagship platform is currently under development. As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You’ll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.
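Purely as an illustration of the "Python-based strategy interface" idea described above (the platform itself is still under development, so this is not its actual API; all names are hypothetical):

```python
# Hypothetical sketch of a Python strategy interface and an isolated runner.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Tick:
    symbol: str
    price: float
    timestamp: float


class Strategy(ABC):
    """User strategies implement a small, engine-agnostic interface."""

    @abstractmethod
    def on_tick(self, tick: Tick) -> str | None:
        """Return 'BUY', 'SELL', or None for no action."""


class MeanReversion(Strategy):
    def __init__(self, threshold: float = 0.02):
        self.threshold = threshold
        self.last_price: float | None = None

    def on_tick(self, tick: Tick) -> str | None:
        signal = None
        if self.last_price is not None:
            move = (tick.price - self.last_price) / self.last_price
            if move < -self.threshold:
                signal = "BUY"   # sharp drop: bet on reversion
            elif move > self.threshold:
                signal = "SELL"
        self.last_price = tick.price
        return signal


def run_backtest(strategy: Strategy, ticks: list[Tick]) -> list[tuple[float, str]]:
    """In production the core engine (C++/Rust) would stream ticks over
    gRPC/ZeroMQ; here we simply iterate in-process for illustration."""
    fills = []
    for tick in ticks:
        signal = strategy.on_tick(tick)
        if signal:
            fills.append((tick.timestamp, signal))
    return fills
```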

We are a remote-first team and are open to hiring exceptional candidates globally.

Core Tasks

·      Build and maintain the trading engine core for execution, backtesting, and event logging.

·      Develop isolated strategy execution runners to support multi-user, multi-strategy environments.

·      Implement abstraction layers for brokers and market data feeds to offer a unified API experience.

·      Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies.

·      Implement logic to parse and execute JSON-based strategy DSL from the strategy builder.

·      Design compute-optimized components for multi-asset workflows and scalable backtesting.

·      Capture real-time state, performance metrics, and slippage for both live and simulated runs.

·      Collaborate with infrastructure engineers to support high-availability deployments.

Top Technical Competencies

·      Proficiency in distributed systems, concurrency, and system design.

·      Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.

·      Deep understanding of data structures and algorithms with a focus on low-latency performance.

·      Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).

·      Familiarity with Linux-based environments and system-level performance tuning.

 

Bonus Competencies

·      Understanding of financial markets, asset classes, and algorithmic trading strategies.

·      3–5 years of prior Backend experience.

·      Hands-on experience with backtesting frameworks or financial market simulators.

·      Experience with sandboxed execution environments or paper trading platforms.

·      Advanced knowledge of multithreading, memory optimization, or compiler construction.

·      Educational background from Tier-I or Tier-II institutions with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.

What We Offer

·      Opportunity to shape the backend architecture of a next-gen fintech startup.

·      A collaborative, technically driven culture.

·      Competitive compensation with performance-based bonuses.

·      Flexible working hours and a remote-friendly environment for candidates across the globe.

·      Exposure to financial modeling, trading infrastructure, and real-time applications.

·      Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.

Ideal Candidate

You’re a backend-first thinker who’s obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.

 


Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Chennai, Kochi (Cochin), Bengaluru (Bangalore), Kolkata, Thiruvananthapuram
4 - 8 yrs
₹5L - ₹20L / yr
Machine Learning (ML)
Python
MLOps

Machine Learning (ML) / MLOps Engineer Job Description

An ML/MLOps Engineer is responsible for designing, developing, and deploying machine learning models and pipelines. Here's a brief overview:


Key Responsibilities

- Model Development: Design and develop machine learning models using various algorithms and techniques.

- MLOps: Implement and manage machine learning pipelines, including data preparation, model training, and deployment.

- Model Deployment: Deploy machine learning models to production environments, ensuring scalability and reliability.

- Model Monitoring: Monitor model performance and retrain models as needed to maintain accuracy and relevance.

- Collaboration: Collaborate with cross-functional teams, including data scientists, product managers, and engineers.


Technical Skills

- Machine Learning: Strong understanding of machine learning concepts, including supervised and unsupervised learning, deep learning, and reinforcement learning.

- Programming: Proficiency in programming languages like Python, R, or Julia.

- ML Frameworks: Experience with machine learning frameworks like TensorFlow, PyTorch, or scikit-learn.

- MLOps Tools: Familiarity with MLOps tools like TensorFlow Extended (TFX), MLflow, or Kubeflow.

- Cloud Platforms: Experience with cloud platforms like AWS, Azure, or GCP.

Coimbatore, Bengaluru (Bangalore), Mumbai
1 - 4 yrs
₹3.4L - ₹5L / yr
Python
JavaScript
Java
HTML/CSS
Big Data
+2 more

The Assistant Professor in CSE will teach undergraduate and graduate courses, conduct independent and collaborative research, mentor students, and contribute to departmental and institutional service.

KGiSL Educational Institutions

Posted by Nazrin MN
Coimbatore, Tirupur
1 - 5 yrs
₹1.5L - ₹3.5L / yr
Java
C
C++
SQL
Python
+1 more

We are seeking a motivated and skilled Technical Trainer to deliver effective training programs on technical subjects to students or employees, ensuring clear understanding and practical skill development. The trainer will design training content, conduct engaging sessions, and evaluate learner progress to enhance technical competency aligned with industry standards.

LITMAS AI
Remote only
3 - 10 yrs
₹15L - ₹35L / yr
NodeJS (Node.js)
MongoDB
Large Language Models (LLM)
NextJs (Next.js)
Python
+1 more

Founding Engineer - LITMAS

About LITMAS

LITMAS is revolutionizing litigation with the first AI-powered platform built specifically for elite litigators. We're transforming how attorneys research, strategize, draft, and win cases by combining comprehensive case repositories with cutting-edge AI validation and workflow automation. We are a team incubated by experienced litigators, building the future of legal technology.

The Opportunity

We're seeking a Founding Engineer to join our core team and shape the technical foundation of LITMAS. This is a rare opportunity to build a category-defining product from the ground up, working directly with the founders to create technology that will transform the US litigation market.

As a founding engineer, you'll have significant ownership over our technical architecture, product decisions, and company culture. Your code will directly impact how thousands of attorneys practice law.

What You'll Do

  • Architect and build core platform features using Python, Node.js, Next.js, React, and MongoDB
  • Design and implement production-grade LLM systems with advanced tool usage, RAG pipelines, and agent architectures
  • Build AI workflows that combine multiple tools for legal research, validation, and document analysis
  • Create scalable RAG infrastructure to handle thousands of legal documents with high accuracy
  • Implement AI tool chains that provide agents with the tool inputs they need
  • Design intuitive interfaces that make complex legal workflows simple and powerful
  • Own end-to-end features from conception through deployment and iteration
  • Establish engineering best practices for AI systems including evaluation, monitoring, and safety
  • Collaborate directly with founders on product strategy and technical roadmap

The Ideal Candidate

You're not just an AI engineer; you're someone who understands how to build reliable, production-grade AI systems that users can trust. You've wrestled with RAG accuracy, tool reliability, and LLM hallucinations in production. You know the difference between a demo and a system that handles real-world complexity. You're excited about applying AI to transform how legal professionals work.
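For illustration only (not LITMAS code, and deliberately simplified): the retrieval step of a RAG pipeline boils down to embedding a query and ranking stored chunks by similarity before handing the best matches to the LLM. The embed function below is a random stand-in for a real embedding model.

```python
# Toy RAG retrieval sketch: rank document chunks by cosine similarity to a query.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (OpenAI, Cohere, sentence-transformers, ...)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)


def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scored = []
    for chunk in chunks:
        c = embed(chunk)
        cosine = float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c)))
        scored.append((cosine, chunk))
    scored.sort(reverse=True)  # highest similarity first
    return [chunk for _, chunk in scored[:k]]

# The retrieved chunks are then placed into the LLM prompt, and the model is
# asked to answer only from that context -- one common guard against hallucination.
```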


What We're Looking For

Must-Haves

  • Deployed production-grade LLM applications with demonstrable experience in:
      • Tool usage and function calling
      • RAG (Retrieval-Augmented Generation) implementation at scale
      • Agent architectures and multi-step reasoning
      • Prompt engineering and optimization
  • Knowledge of multiple LLM providers (OpenAI, Anthropic, Cohere, open-source models)
  • Background in building AI evaluation and monitoring systems
  • Experience with document processing and OCR technologies
  • 3+ years of production experience with Node.js, Python, Next.js, and React
  • Strong MongoDB expertise including schema design and optimization
  • Experience with vector databases (Pinecone, Weaviate, Qdrant, or similar)
  • Full-stack mindset with ability to own features from database to UI
  • Track record of shipping complex web applications at scale
  • Deep understanding of LLM limitations, hallucination prevention, and validation techniques

Tech Stack

  • Backend: Node.js, Express, MongoDB
  • Frontend: Next.js, React, TypeScript, Modern CSS
  • AI/ML: LangChain/LlamaIndex, OpenAI/Anthropic APIs, vector databases, custom AI tools
  • Additional: Document processing, search infrastructure, real-time collaboration

What We Offer

  • Significant equity stake: true ownership in the company you're building
  • Competitive compensation: commensurate with experience
  • Direct impact: your decisions shape the product and company
  • Learning opportunity: work with cutting-edge AI and legal technology
  • Flexible work: remote-first with a global team
  • AI resources: access to the latest models and compute resources

Interview Process

One more thing: Our process includes deep technical interviews and fit conversations. As part of the evaluation, there will be an extensive take-home test that you should expect to take at least 4-5 hours, depending on your skill level. This allows us to see how you approach real problems similar to what you'll encounter at LITMAS.

Hyderabad, Bengaluru (Bangalore), Mumbai, Delhi, Pune, Chennai
0 - 1 yrs
₹10L - ₹20L / yr
Python
Object Oriented Programming (OOPs)
JavaScript
Java
Data Structures
+1 more


About NxtWave


NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.

Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.

Know more:

🌐 NxtWave | NIAT

About the Role

As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.


Key Responsibilities

  • Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
  • Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
  • Mentor students in academic, career, and project development goals.
  • Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
  • Drive research-led content development, and contribute to innovation in teaching methodologies.
  • Support capstone projects, hackathons, and collaborative research opportunities with industry.
  • Foster a high-performance learning environment in classes of 70–100 students.
  • Collaborate with cross-functional teams for continuous student development and program quality.
  • Actively participate in faculty training, peer reviews, and academic audits.


Eligibility & Requirements

  • Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
  • Strong academic and research orientation, preferably with publications or project contributions.
  • Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
  • A deep commitment to education, student success, and continuous improvement.

Must-Have Skills

  • Expertise in Python, Java, JavaScript, and advanced programming paradigms.
  • Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
  • Excellent communication, classroom delivery, and presentation skills.
  • Familiarity with academic content tools like Google Slides, Sheets, Docs.
  • Passion for educating, mentoring, and shaping future developers.

Good to Have

  • Industry experience or consulting background in software development or research-based roles.
  • Proficiency in version control systems (e.g., Git) and agile methodologies.
  • Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
  • A drive to innovate in teaching, curriculum design, and student engagement.

Why Join Us?

  • Be at the forefront of shaping India’s tech education revolution.
  • Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
  • Competitive compensation with strong growth potential.
  • Create impact at scale by mentoring hundreds of future-ready tech leaders.


B2B Automation Platform

Agency job
via AccioJob by AccioJobHiring Board
Noida
0 - 1 yrs
₹4L - ₹5L / yr
DSA
Python
Django
Flask

AccioJob is conducting an offline hiring drive with B2B Automation Platform for the position of SDE Trainee Python.


Link for registration: https://go.acciojob.com/6kT7Ea


Position: SDE Trainee Python – DSA, Python, Django/Flask


Eligibility Criteria:

  • Degree: B.Tech / BE / MCA
  • Branch: CS / IT
  • Work Location: Noida

Compensation:

  • CTC: ₹4 - ₹5 LPA
  • Service Agreement: 2-year commitment

Note:

Candidates must be available for face-to-face interviews in Noida and should be ready to join immediately.


Evaluation Process:

Round 1: Assessment at AccioJob Noida Skill Centre

Further Rounds (for shortlisted candidates):

  • Technical Interview 1
  • Technical Interview 2
  • Tech + Managerial Round (Face-to-Face)

Important:

Please bring your laptop for the assessment.


Link for registration: https://go.acciojob.com/6kT7Ea

Deqode

Posted by Apoorva Jain
Indore
0 - 2 yrs
₹6L - ₹12L / yr
Blockchain
ETL
Artificial Intelligence (AI)
Generative AI
Python
+3 more

About Us

Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.


As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.


What We Build

  • Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
  • DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
  • ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
  • High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
  • Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.


Evaluation Process

  • HR Discussion – A brief conversation to understand your motivation and alignment with the role.
  • Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
  • Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
  • Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
  • Final Interview – A concluding round to explore your background, interests, and team fit in depth.
  • Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.


Blockchain Data & ML Engineer


As a Blockchain Data & ML Engineer, you’ll work on ingesting and modeling on-chain behavior, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.
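For a flavour of the day-to-day work, here is a small hypothetical sketch of one pipeline step: aggregating raw transfer events into per-address features and flagging crude volume outliers (a simplistic stand-in for a real model). The columns and data are invented for illustration.

```python
# Hypothetical sketch: turn raw on-chain transfer events into per-address
# features and flag coarse volume outliers.
import pandas as pd

# In practice these rows would come from an ingestion pipeline (node RPC,
# an indexer, or a message queue); here we fabricate a tiny example frame.
events = pd.DataFrame(
    {
        "block": [100, 101, 101, 102],
        "from_address": ["0xa", "0xb", "0xa", "0xc"],
        "value_eth": [1.2, 0.4, 50.0, 0.9],
    }
)

features = (
    events.groupby("from_address")["value_eth"]
    .agg(tx_count="count", total_eth="sum", max_eth="max")
    .reset_index()
)

# Flag addresses whose largest transfer sits far above the overall mean.
threshold = events["value_eth"].mean() + 3 * events["value_eth"].std()
features["is_outlier"] = features["max_eth"] > threshold
print(features)
```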


What You’ll Work On

  • Build and maintain ETL pipelines for ingesting and processing blockchain data.
  • Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
  • Evaluate model performance, tune hyperparameters, and document experimental results.
  • Develop monitoring tools to track model accuracy, data drift, and system health.
  • Collaborate with infrastructure and execution teams to integrate ML components into production systems.
  • Design and maintain databases and storage systems to efficiently manage large-scale datasets.


Ideal Traits

  • Strong in data structures, algorithms, and core CS fundamentals.
  • Proficiency in any programming language.
  • Curiosity about how blockchain systems and crypto markets work under the hood.
  • Self-motivated, eager to experiment and learn in a dynamic environment.


Bonus Points For

  • Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
  • Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
  • Participation in hackathons or open-source contributions.


What You’ll Gain

  • Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
  • Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
  • Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
  • Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters.


What We Value:

  • Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
  • Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
  • Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
  • Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.


Compensation:

  • INR 6 - 12 LPA
  • Performance Bonuses: Linked to contribution, delivery, and impact.
EaseMyTrip.com

Posted by Madhu Sharma
Gurugram
5 - 6 yrs
₹10L - ₹15L / yr
Python
Generative AI
React.js
  • Strong hands-on experience in Generative AI / LLMs / NLP (OpenAI, LangChain, Hugging Face, etc.).
  • Proficiency in Python for AI/ML model development and backend integration.
  • Experience with React JS for building frontend applications.
  • Familiarity with REST APIs, CI/CD, and agile environments.
  • Solid understanding of data structures, algorithms, and system design.
Client based at Pune location.
Remote only
8 - 12 yrs
₹24L - ₹40L / yr
BigID
SME
Subject-matter expert
BigID Developer
Python
+4 more

Job Title: BigID Deployment Lead/ SME

Duration: 6+ Months

Exp. Level: 8-12yrs


Job Summary:

We are seeking a highly skilled and experienced BigID Deployment Lead / Subject Matter Expert (SME) to lead the implementation, configuration, and optimization of BigID's data intelligence platform. The ideal candidate will have deep expertise in data discovery, classification, privacy, and governance, and will play a pivotal role in ensuring successful deployment and integration of BigID solutions across enterprise environments.


Key Responsibilities:

Lead end-to-end deployment and configuration of BigID solutions in complex enterprise environments.

Serve as the primary SME for BigID, advising stakeholders on best practices, architecture, and integration strategies.

Collaborate with cross-functional teams including security, compliance, data governance, and IT to align BigID capabilities with business requirements.

Customize and fine-tune BigID policies, connectors, and scanning configurations to meet data privacy and compliance objectives (e.g., GDPR, CCPA, HIPAA).

Conduct workshops, training sessions, and knowledge transfers for internal teams and clients.

Troubleshoot and resolve technical issues related to BigID deployment, performance, and data discovery.

Stay current with BigID product updates, industry trends, and regulatory changes to ensure continuous improvement and compliance.

Required Qualifications:

Bachelor's or Master's degree in Computer Science, Information Technology, Cybersecurity, or a related field.

5+ years of experience in data governance, privacy, or security domains.

2+ years of hands-on experience with BigID platform deployment and configuration.

Strong understanding of data classification, metadata management, and data mapping.

Experience with cloud platforms (AWS, Azure, GCP) and integrating BigID with cloud-native services.

Familiarity with data privacy regulations (GDPR, CCPA, etc.) and risk management frameworks.

Excellent communication, documentation, and stakeholder management skills.

Preferred Qualifications:

BigID certification(s) or formal training.

Experience with scripting (Python, PowerShell) and API integrations.

Background in enterprise data architecture or data security.

  • Experience working in Agile/Scrum environments.
Engineering and technology company

Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
12 - 15 yrs
₹20L - ₹25L / yr
DevOps
Android.mk
Gradle
CI/CD
Jenkins
+5 more

Job Overview

  • Required experience: 8–12 years in DevOps/system debugging.
  • Develop and maintain an automated infrastructure for continuous integration and deployment (CI/CD).
  • Experience creating automated CI/CD pipelines using tools like GitLab.
  • Demonstrated capability with CI/CD tools such as Jenkins, Git/Gerrit, and JFrog (Artifactory, Xray, Pipelines).
  • Strong development expertise in Python and Linux scripting languages.
  • Strong knowledge of UNIX/Linux.
  • Knowledge of the Android build system (Android.mk, Android.bp, Gradle).
  • Unit testing/integration testing and code-coverage tools.
  • Knowledge of deploying containers using containerization tools like Docker.
  • Excellent problem-solving and debugging skills; able to take ownership of the CI/CD configuration.
  • Eliminate variation by working with global engineering teams to define and implement common processes and configuration that work for all projects.
  • Maintain and update current scripts/tools to support evolving software.
  • Good team player; follows agile development methodologies and ASPICE practices as part of the software development lifecycle.
  • Good understanding of quality control and test automation in an Agile-based continuous integration environment.
Remote only
0 - 1 yrs
₹5000 - ₹5500 / mo
Python
DBMS
Amazon Web Services (AWS)

Description

Job Description:

Company: Springer Capital

Type: Internship (Remote, Part-Time/Full-Time)

Duration: 3–6 months

Start Date: Rolling

Compensation:


About the role:

We’re building high-performance backend systems that power our financial and ESG intelligence platforms and we want you on the team. As a Backend Engineering Intern, you’ll help us develop scalable APIs, automate data pipelines, and deploy secure cloud infrastructure. This is your chance to work alongside experienced engineers, contribute to real products, and see your code go live.


What You'll Work On:

As a Backend Engineering Intern, you’ll be shaping the systems that power financial insights.


  • Engineering scalable backend services in Python, Node.js, or Go
  • Designing and integrating RESTful APIs and microservices
  • Working with PostgreSQL, MongoDB, or Redis for data persistence
  • Deploying on AWS/GCP, using Docker, and learning Kubernetes on the fly
  • Automating infrastructure and shipping faster with CI/CD pipelines
  • Collaborating with a product-focused team that values fast iteration


What We’re Looking For:


  • A builder mindset – you like writing clean, efficient code that works
  • Strong grasp of backend languages (Python, Java, Node, etc.)
  • Understanding of cloud platforms and containerization basics
  • Basic knowledge of databases and version control
  • Students or self-taught engineers actively learning and building


Preferred skills:


  • Experience with serverless or event-driven architectures
  • Familiarity with DevOps tools or monitoring systems
  • A curious mind for AI/ML, fintech, or real-time analytics


What You’ll Get:


  • Real-world experience solving core backend problems
  • Autonomy and ownership of live features
  • Mentorship from engineers who’ve built at top-tier startups
  • A chance to grow into a full-time offer

Hypersonix Inc

Posted by Reshika Mendiratta
Remote only
9yrs+
Up to ₹30L / yr (varies)
Data Analytics
SQL
MS-Excel
Python
R Programming
+5 more

Role overview

As a Data Analyst, you should be able to propose creative solutions to develop or solve a business problem. You should be able to recommend, design, and develop state-of-the-art data-driven analyses using statistical methods, and apply an understanding of advanced analytics methodologies to solve business problems and recommend insights. Form hypotheses and run experiments to gain empirical insights and validate them. Identify and eliminate possible obstacles and identify alternative creative solutions.
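As a small, generic illustration of "run experiments to validate the hypothesis" (not a Hypersonix-specific workflow; the numbers below are invented), a two-sample test over experiment data might look like:

```python
# Illustrative A/B experiment check: did the variant move the metric?
from scipy import stats

control = [12.1, 11.8, 12.6, 12.0, 11.9, 12.3]   # e.g. revenue per session (control group)
variant = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0]   # same metric under the proposed change

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is unlikely to be chance; hypothesis supported.")
else:
    print("Not enough evidence; keep iterating on the hypothesis.")
```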


Roles and Responsibilities: -

  • Identify opportunities and partner with key stakeholders to set priorities, manage expectations, facilitate change required to activate insights, and measure the impact
  • Deconstruct problems and goals to form a clear picture for hypothesis generation and use best practices around decision science approaches and technology to solve business challenges
  • Can guide team to Integrate custom analytical solutions (e.g., predictive modeling, segmentation, issue tree frameworks) to support data-driven decision-making
  • Monitors and manages project baseline to ensure activities are occurring as planned - scope, budget and schedule – manages variances
  • Anticipates problems before they occur; defines the problem or risk; identifies possible causes; works with team to identify solutions; selects and implements most appropriate solution
  • Identifies potential points of contention for missed deliverables; creates and implements strategy to mitigate shortfalls in timeline and budget
  • Develop and manage plans to address project strengths, weaknesses, opportunities and threats
  • Translate and communicate results, recommendations, and opportunities to improve data solutions to internal and external leadership with easily consumable reports and presentations.
  • Expected to act independently to deliver projects to schedule, budget and scope; support provided as required and requested, and is self-driven and motivated
  • Able to manage multiple clients, lead technical client calls and act as a bridge between product teams and client


Experience Required:

  • 9+ years of experience.
  • Experience in design and review of new solution concepts and leading the delivery of high-impact analytics solutions and programs for global clients
  • Should be able to apply domain knowledge to functional areas like market size estimation, business growth strategy, strategic revenue management, marketing effectiveness
  • Have business acumen to manage revenues profitably and meet financial goals consistently. Able to quantify business value for clients and create win-win commercial propositions.
  • Must have the ability to adapt to changing business priorities in a fast-paced business environment
  • Should have the ability to handle structured/unstructured data and have prior experience in loading, validating, and cleaning various types of data
  • Should have a very good understanding of data structures and algorithms
  • This is a Remote (work from home) position.
  • Experience leading and working independently on projects in a fast-paced environment
  • Management skills to manage more than one large, complex project simultaneously
  • Strong communication and interpersonal skills (includes negotiation)
  • Excellent written and verbal communication skills


Must have technical skills: -

  • IT background with experience across the systems development life cycle with experience in all project phases – plan, initiate, elaborate, design, build, test, implement.
  • Working knowledge of market-leading data analytics tools such as Spotfire, Tableau, Power BI, and SAP HANA is desired
  • Domain experience in retail/e-commerce is a plus
  • Well versed in advanced SQL/Excel
  • Good with any scripting language and data extraction in Python, R, etc.
  • Working knowledge of project management methodology, tools and templates (includes program/project planning, schedule development, scope management and cost management)
QAgile Services

Posted by Radhika Chotai
Gurugram
5 - 10 yrs
₹15L - ₹22L / yr
Large Language Models (LLM)
Generative AI
Python
LangChain
Windows Azure
+3 more


Role Title: Senior LLM Engineer - GenAI / ML (Python, LangChain)

Role Overview

We are seeking highly skilled and experienced Senior LLM Engineers with a strong background in Machine Learning and Software Engineering who have transitioned into Generative AI (GenAI) and Large Language Models (LLMs) over the past 3-4 years. This is a hands-on engineering role focused on designing, building, and deploying GenAI-based systems using state-of-the-art frameworks and tools.

The role involves active participation in architectural design, model fine-tuning, and cross-functional collaboration with business stakeholders, data teams, and engineering leaders to deliver enterprise-grade GenAI solutions.

Key Responsibilities

GenAI System Design: Architect and develop GenAI/LLM-based systems using frameworks like LangChain and Retrieval-Augmented Generation (RAG) pipelines.

AI Solution Delivery: Translate complex business requirements into scalable, production-ready AI solutions.

Cross-functional Collaboration: Work closely with business SMEs, product owners, and data engineering teams to align AI models with real-world use cases.

System Optimization: Contribute to code reviews, system architecture discussions, and performance tuning of deployed models.

Required Skills

• 7-12 years of total experience in ML/Software Engineering, with 3-4 years of recent experience in LLMs and Generative AI.

• Strong proficiency in Python, LangChain, and SQL.

• Experience with agent frameworks.

• Experience working with cloud platforms such as AWS, Azure, or GCP.

• Solid understanding of ML pipelines, deployment strategies, and GenAI use cases.

• Ability to work independently and collaboratively in fast-paced, cross-functional environments.

• Strong verbal and written communication skills; ability to engage effectively with technical and non-technical stakeholders.

Preferred Qualifications

• Minimum 1+ years of hands-on experience specifically in LLM/GenAI-focused implementations.

• Experience delivering ML/AI products from prototyping through to production.

• Familiarity with MLOps, CI/CD, containerization, and scalable AI model deployment.



Aeries Technology

Posted by Nikita Sinha
Bengaluru (Bangalore)
7 - 12 yrs
Up to ₹42L / yr (varies)
DevOps
Java
Python
Groovy
C#

This role is part of the Quickbase Center of Excellence, a global initiative operated in partnership with Aeries, and offers an exciting opportunity to work on cutting-edge DevOps technologies with strong collaboration across teams in the US, Bulgaria, and India.

Key Responsibilities

  • Build and manage CI/CD pipelines across environments
  • Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC)
  • Develop internal tools and scripts to boost developer productivity
  • Set up and maintain monitoring, alerting, and performance dashboards
  • Collaborate with cross-functional engineering teams to ensure infrastructure scalability and security
  • Contribute to the DevOps Community of Practice by sharing best practices and tools
  • Continuously evaluate and integrate new technologies and DevOps trends

Skills & Experience Required

  • Strong scripting experience: Bash, PowerShell, Python, or Groovy
  • Hands-on with containerization tools like Docker and Kubernetes
  • Proficiency in Infrastructure as Code: Terraform, CloudFormation, or Azure Templates
  • Experience with CI/CD tools such as Jenkins, TeamCity, GitHub Actions, or CircleCI
  • Exposure to Serverless computing (AWS Lambda or Google App Engine)
  • Cloud experience with AWS, GCP, or Azure
  • Solid understanding of networking concepts: DNS, DHCP, SSL, subnets
  • Experience with monitoring tools and alerting platforms
  • Basic understanding of security principles and best practices
  • Prior experience working directly with software engineering teams

Preferred Qualifications

  • Bachelor’s degree in Computer Science or related discipline
  • Strong communication skills (verbal & written)
  • Ability to work effectively in a distributed, high-performance team
  • Passion for DevOps best practices and a continuous learning mindset
  • Customer-obsessed and committed to improving engineering efficiency

Why Join Us?

  • Quickbase Center of Excellence: Purpose-built team delivering excellence from Bangalore
  • Fast-Growing Environment: Be part of a growing company with strong career advancement
  • Innovative Tech Stack: Exposure to cutting-edge tech in cloud, AI, and DevOps tooling
  • Inclusive Culture: ERGs and leadership development programs to support growth
  • Global Collaboration: Work closely with teams across the US, Bulgaria, and India

About Quickbase

Quickbase is a leading no-code platform that empowers organizations to create enterprise applications without writing code. Founded in 1999 and trusted by over 6,000 customers, Quickbase helps companies connect data, streamline workflows, and achieve real-time insights.

Learn more: https://www.quickbase.com

Cognida

Posted by Srilatha Swarnam
Hyderabad
12 - 20 yrs
₹30L - ₹60L / yr
Architecture
Python
Fullstack Developer
User Interface (UI) Design
React.js
+1 more

About Cognida.ai:


Our Purpose is to boost your competitive advantage using AI and Analytics.

We Deliver tangible business impact with data-driven insights powered by AI. Drive revenue growth, increase profitability and improve operational efficiencies.

We Are technologists with keen business acumen - Forever curious, always on the front lines of technological advancements. Applying our latest learnings, and tools to solve your everyday business challenges.

We Believe the power of AI should not be the exclusive preserve of the few. Every business, regardless of its size or sector deserves the opportunity to harness the power of AI to make better decisions and drive business value.

We See a world where our AI and Analytics solutions democratise decision intelligence for all businesses. With Cognida.ai, our motto is ‘No enterprise left behind’.


Position: Python Fullstack Architect

Location: Hyderabad

Job Summary

We’re seeking a seasoned Python Fullstack Architect with 15+ years of experience to lead solution design, mentor teams, and drive technical excellence across projects. You'll work closely with stakeholders, contribute to architecture governance, and integrate modern technologies across the stack.

Key Responsibilities

  • Design and review Python-based fullstack solution architectures.
  • Guide development teams on best practices, modern frameworks, and cloud-native patterns.
  • Engage with clients to translate business needs into scalable technical solutions.
  • Stay current with tech trends and contribute to internal innovation initiatives.

Required Skills

  • Strong expertise in Python (Django/Flask/FastAPI) and frontend frameworks (React, Angular, etc.).
  • Cloud experience (AWS, Azure, or GCP) and DevOps/CI-CD setup.
  • Familiarity with enterprise tools: RabbitMQ, Kafka, OAuth2, PostgreSQL, MongoDB.
  • Solid understanding of microservices, API design, batch/stream processing.
  • Strong leadership, mentoring, and architectural problem-solving skills.


Product company for financial operations automation platform

Agency job
via Esteem leadership by Suma Raju
Hyderabad
4 - 6 yrs
₹20L - ₹25L / yr
Python
Java
Kubernetes
Google Cloud Platform (GCP)

Mandatory Criteria

  • Candidate must have strong hands-on experience with Kubernetes, with at least 2 years in production environments.
  • Candidate should have expertise in at least one public cloud platform [GCP (preferred), AWS, Azure, or OCI].
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Candidate should have strong backend experience.
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.


About the Role


We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.

# Experience with Kubernetes is mandatory.

 

Key Responsibilities

  • Design and develop scalable, reliable backend services and cloud-native applications.
  • Build and manage RESTful APIs, microservices, and asynchronous data processing systems (a minimal asyncio sketch follows this list).
  • Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
  • Implement and manage CI/CD pipelines and infrastructure automation.
  • Collaborate with frontend, DevOps, and product teams in an agile environment.
  • Ensure high code quality through testing, reviews, and documentation.

 

Required Skills

  • Strong hands-on experience with Kubernetes of at least 2 years in production environments (mandatory).
  • Expertise in at least one public cloud platform [GCP (preferred), AWS, Azure, or OCI].
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Solid understanding of distributed systems, microservices, and cloud-native architecture.
  • Experience with containerization using Docker and Kubernetes-native deployment workflows.
  • Working knowledge of SQL and relational databases.

  

Preferred Qualifications

  • Experience working across multiple cloud platforms.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.

  

Nice to Have

  • Knowledge of NoSQL databases or event-driven/message-based architectures.
  • Experience with serverless services, managed data pipelines, or data lake platforms.
Read more
KJBN labs

at KJBN labs

2 candid answers
sakthi ganesh
Posted by sakthi ganesh
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Hadoop
Apache Kafka
Spark
redshift
skill iconPython
+9 more

Senior Data Engineer Job Description

Overview

The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.

Key Responsibilities


  • Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
  • Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
  • Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
  • Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
  • Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
  • Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
  • Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
  • Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
  • Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.
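The pipeline-building and automation responsibilities above are typically expressed through an orchestration tool such as Apache Airflow (listed under the technical skills below). A minimal, hypothetical sketch of such a DAG, with placeholder task logic, dataset names, and schedule:

```python
# A minimal sketch (not this team's actual pipeline) of an ingest -> transform -> validate
# DAG using Apache Airflow 2.x. Task bodies, the dag_id, and the schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest(**context):
    # Pull raw records from a hypothetical source API or bucket.
    ...


def transform(**context):
    # Clean and reshape the raw records into the warehouse schema.
    ...


def validate(**context):
    # Row counts, null checks, and schema checks before publishing.
    ...


with DAG(
    dag_id="orders_daily",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    ingest_task >> transform_task >> validate_task
```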

Required Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
  • Experience: 5+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.
  • Technical Skills:
    • Proficiency in programming languages such as Python, Java, or Scala.
    • Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
    • Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
    • Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
    • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
    • Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.

  • Soft Skills:
    • Excellent problem-solving and analytical skills.
    • Strong communication and collaboration abilities.
    • Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
  • Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.

Preferred Qualifications

  • Experience with real-time data processing and streaming architectures.
  • Familiarity with machine learning pipelines and MLOps practices.
  • Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
  • Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.

Work Environment

  • Location: Hybrid/Remote/On-site (depending on company policy).
  • Team: Collaborative, cross-functional team environment with data scientists, analysts, and business stakeholders.
  • Hours: Full-time, with occasional on-call responsibilities for critical data systems.

Read more
Product company for financial operations automation platform


Agency job
via Esteem leadership by Suma Raju
Hyderabad
4 - 5 yrs
₹20L - ₹25L / yr
skill iconPython
skill iconKubernetes
Google Cloud Platform (GCP)
skill iconJava
skill iconAmazon Web Services (AWS)

Mandatory Criteria:

  • Candidate must have strong hands-on experience with Kubernetes (at least 2 years in production environments).
  • Candidate should have expertise in at least one public cloud platform [GCP (preferred), AWS, Azure, or OCI].
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Candidate should have strong backend experience.
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.


About the Role


We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.

# Experience with Kubernetes is mandatory.


Key Responsibilities

  • Design and develop scalable, reliable backend services and cloud-native applications.
  • Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
  • Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
  • Implement and manage CI/CD pipelines and infrastructure automation.
  • Collaborate with frontend, DevOps, and product teams in an agile environment.
  • Ensure high code quality through testing, reviews, and documentation.

 

Required Skills

  • Strong hands-on experience with Kubernetes of at least 2 years in production environments (mandatory).
  • Expertise in at least one public cloud platform [GCP (preferred), AWS, Azure, or OCI].
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Solid understanding of distributed systems, microservices, and cloud-native architecture.
  • Experience with containerization using Docker and Kubernetes-native deployment workflows.
  • Working knowledge of SQL and relational databases.

  

Preferred Qualifications

  • Experience working across multiple cloud platforms.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.

 

Nice to Have

  • Knowledge of NoSQL databases or event-driven/message-based architectures.
  • Experience with serverless services, managed data pipelines, or data lake platforms.


Read more
HeyCoach
Bengaluru (Bangalore)
0 - 1 yrs
₹1.3L - ₹1.5L / yr
skill iconData Science
skill iconPython
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Statistical Analysis
+2 more

About the Role

We are seeking a motivated and knowledgeable Data Science Teaching Assistant Intern to support our academic team in delivering high-quality learning experiences. This role is ideal for someone who enjoys teaching, solving problems, and wants to gain hands-on experience in the EdTech and Data Science domain.


As a Teaching Assistant, you'll help learners understand complex data science topics, resolve doubts, assist during live classes, and contribute to high-quality content development.


Opportunity to receive a Pre-Placement Offer (PPO) based on performance.


Key Responsibilities

  • Assist instructors during live classes by providing support and addressing learners' queries.
  • Conduct doubt-solving sessions to help learners grasp difficult concepts in Data Science, Python, Machine Learning, and related topics.
  • Contribute to content creation and review, including assignments, quizzes, and learning materials.
  • Provide one-on-one academic support and mentoring to learners when needed.
  • Ensure a positive and engaging learning environment during sessions.


Requirements

  • Bachelor's degree in Data Science, CSE, Statistics, or a related field.
  • Strong foundation in Python, Statistics, Machine Learning, and Data Analysis.
  • Excellent communication and interpersonal skills.
  • Ability to break down technical concepts into simple explanations.
  • Prior experience in teaching, mentoring, or assisting is a plus.
  • Passionate about education and helping others learn.


Perks

  • Hands-on teaching and mentoring experience.
  • Exposure to real-time learner interaction and feedback.
  • Mentorship from senior instructors and data science professionals.
  • Opportunity to receive a Pre-Placement Offer (PPO) based on performance.

Read more
Quanteon Solutions
DurgaPrasad Sannamuri
Posted by DurgaPrasad Sannamuri
Hyderabad
5 - 8 yrs
₹6L - ₹20L / yr
skill iconPython
Automation
Manual testing
Functional testing

We’re looking for a strong QA Engineer with 5+ years hands-on experience in Python to join a fast-paced team and contribute from Day 1.


What you’ll be doing:

🔹 Jump directly into writing Python scripts for web and API automation

🔹 Maintain and extend a Selenium automation framework developed in Python

🔹 Collaborate with developers and product teams to ensure high-quality releases

🔹 Own testing for core modules and APIs


Must-Have Skills:

✅ Strong functional QA background

✅ Proficiency in Python (this is a must)

✅ Hands-on experience with Selenium automation using Python

✅ Ability to work with and manage existing Python-based automation frameworks

✅ Experience in Web and API testing
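
For context, here is a minimal sketch of the kind of Python-based web and API check these skills imply; the base URL, locators, and endpoint are placeholders, not the team's actual framework:

```python
# Illustrative only: a tiny Selenium UI check plus an API smoke check in Python.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.com"  # placeholder application under test


def test_login_page_renders():
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE_URL}/login")
        # Locating the username field confirms the page rendered.
        assert driver.find_element(By.NAME, "username").is_displayed()
    finally:
        driver.quit()


def test_health_api_returns_ok():
    # A simple API-level check alongside the UI checks.
    response = requests.get(f"{BASE_URL}/api/health", timeout=10)
    assert response.status_code == 200
```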

Read more
HeyCoach
DeepanRaj R
Posted by DeepanRaj R
Bengaluru (Bangalore)
0 - 1 yrs
₹3.5L - ₹4L / yr
skill iconC++
skill iconJava
skill iconPython
Data Structures
Problem solving
+7 more

Location: HSR Sector 6, Bengaluru, India

Job Type: Full-Time - WFO

Work Timings: Wednesday to Sunday: 11:00 AM - 8:00 PM

Monday: 11:00 AM - 5:00 PM; Tuesday: Off

Salary: 3.5 - 4 LPA

0-1 year experience


About HeyCoach:

We are an exceptional group of highly skilled individuals, passionate about addressing a fundamental challenge within the education industry. Our team consists of talented geeks who possess a deep understanding of the issues at hand and are dedicated to finding innovative solutions. In our quest for excellence, we are constantly seeking out remarkable individuals who can contribute to our growth and success.

Whether it's developing cutting-edge technologies, designing immersive learning experiences, or implementing groundbreaking teaching methodologies, we consistently strive for excellence.


About the role:

As a Competitive Programming Engineer at HeyCoach, you will play a pivotal role in building the backbone of essential tools that our learners will utilize to excel in interview preparation and competitive programming. This is a full-time position, ideal for individuals who have recently graduated and possess a strong background in Competitive Programming.


Responsibilities:

● Algorithmic Problem Solving: Demonstrate proficiency in solving complex algorithmic problems and challenges.

● Tool Development: Contribute to the design and development of tools that will aid learners in their competitive programming and interview preparation journey.

● Educational Content Support: Collaborate with the content development team to provide technical insights and support in creating educational content related to competitive programming.

● Quality Assurance: Ensure the quality and efficiency of tools and resources developed, with a keen eye for detail and functionality.

● Research and Development: Stay abreast of the latest trends and technologies in competitive programming and problem-solving domains. Contribute to ongoing research initiatives.

● Collaborative Teamwork: Work closely with cross-functional teams, including developers, educators, and content creators, to align tool development with educational objectives.


Qualifications:

● Bachelor's degree in Computer Science/Engineering or relevant field.

● Strong experience or knowledge of data structures, algorithms, and competitive programming principles.

● Proficiency in at least one programming language (e.g., Python, Java, C++).

● Excellent problem-solving skills and the ability to translate concepts into practical solutions.

● Recent graduates or candidates with relevant competitive programming internships are encouraged to apply.


Preferred Skills:

● Familiarity with educational technology tools and platforms.

● Passion for enhancing the learning experience for individuals aspiring to crack interviews.

● Effective communication and teamwork skills.

● Regular practice on at least one of these platforms is mandatory: LeetCode, Codeforces, CodeChef, GeeksforGeeks, or TopCoder.

Read more
EZSpace Ventures OPC Pvt Ltd
Bhopal
5 - 10 yrs
₹5L - ₹15L / yr
skill iconPython
MERN Stack
Artificial Intelligence (AI)
skill iconMachine Learning (ML)

Job description


Brief Description

One of our clients is looking for a Lead Engineer in Bhopal with 5–10 years of experience. Candidates must have strong expertise in Python. Additional experience in AI/ML, MERN Stack, or Full Stack Development is a plus.


Job Description

We are seeking a highly skilled and experienced Lead Engineer – Python AI to join our dynamic team. The ideal candidate will have a strong background in AI technologies, MERN stack, and Python full stack development, with a passion for building scalable and intelligent systems. This role involves leading development efforts, mentoring junior engineers, and collaborating with cross-functional teams to deliver cutting-edge AI-driven solutions.


Key Responsibilities:

  • Lead the design, development, and deployment of AI-powered applications using Python and MERN stack.
  • Architect scalable and maintainable full-stack solutions integrating AI models and data pipelines.
  • Collaborate with data scientists and product teams to integrate machine learning models into production systems.
  • Ensure code quality, performance, and security across all layers of the application.
  • Mentor and guide junior developers, fostering a culture of technical excellence.
  • Stay updated with emerging technologies in AI, data engineering, and full-stack development.
  • Participate in code reviews, sprint planning, and technical discussions.


Required Skills:

  • 5+ years of experience in software development with a strong focus on Python full stack and MERN stack.
  • Hands-on experience with AI technologies, machine learning frameworks (e.g., TensorFlow, PyTorch), and data processing tools.
  • Proficiency in MongoDB, Express.js, React.js, Node.js.
  • Strong understanding of RESTful APIs, microservices architecture, and cloud platforms (AWS, Azure, GCP).
  • Experience with CI/CD pipelines, containerization (Docker), and version control (Git).
  • Excellent problem-solving skills and ability to work in a fast-paced environment.


Education Qualification:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Certifications in AI/ML or Full Stack Development are a plus.


Read more
TalentLo

at TalentLo

2 candid answers
Satyansh A
Posted by Satyansh A
Remote only
0 - 2 yrs
₹1L - ₹1L / yr
NumPy
pandas
skill iconPython
Scikit-Learn

Required Skills:

•           Basic understanding of machine learning concepts and algorithms

•           Proficiency in Python and relevant libraries (NumPy, Pandas, scikit-learn)

•           Familiarity with data preprocessing techniques

•           Knowledge of basic statistical concepts

•           Understanding of model evaluation metrics

•           Basic experience with at least one deep learning framework (TensorFlow, PyTorch)

•           Strong analytical and problem-solving abilities
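
As a rough illustration of the baseline workflow these skills describe (preprocessing, model fitting, evaluation), here is a small, self-contained scikit-learn example on a toy dataset:

```python
# Illustrative only: train/test split, scaling, a simple classifier, and one evaluation metric.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features, then fit a simple classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate with a basic metric; real work would add cross-validation and more metrics.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```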

 

 

Application Process: Create your profile on our platform, submit your portfolio, GitHub profile, or sample projects.

https://www.talentlo.com/

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Shrutika SaileshKumar
Posted by Shrutika SaileshKumar
Remote, Bengaluru (Bangalore)
5 - 9 yrs
Best in industry
skill iconPython
SDET
BDD
SQL
Data Warehouse (DWH)
+2 more

Primary skill set: QA Automation, Python, BDD, SQL 

As Senior Data Quality Engineer you will:

  • Evaluate product functionality and create test strategies and test cases to assess product quality.
  • Work closely with the on-shore and the offshore team.
  • Work on multiple reports validation against the databases by running medium to complex SQL queries.
  • Good understanding of automation objects and integrations across various platforms/applications.
  • Work as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
  • Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
  • Comfortable working in Linux/Windows environments and hybrid infrastructure models hosted on cloud platforms.
  • Establish processes and a tool set to maintain automation scripts and generate regular test reports.
  • Peer review to provide feedback and ensure the test scripts are flawless.

Core/Must have skills:

  • Excellent understanding of and hands-on experience in ETL/DWH testing, preferably Databricks, paired with Python experience.
  • Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
  • Clear and crisp communication and commitment towards deliverables.
  • Experience in Big Data testing will be an added advantage.
  • Knowledge of Spark and Scala, Hive/Impala, and Python will be an added advantage.

Good to have skills:

  • Test automation using BDD/Cucumber/TestNG combined with strong hands-on experience in Java with Selenium, especially working experience in WebdriverIO.
  • Ability to effectively articulate technical challenges and solutions.
  • Work experience in qTest, Jira, and WebdriverIO.


Read more
Deltek
Remote only
7 - 12 yrs
Best in industry
skill iconPython
skill iconJava
skill icon.NET
skill iconReact.js
TypeScript
+1 more

Title - Principal Software Engineer

Company Summary :

As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com

Business Summary :

The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.

Principal Software Engineer

Position Responsibilities :

  • Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
  • Develop scalable, performant APIs for Deltek products
  • Accountability for the successful implementation of the requirements by the team.
  • Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
  • Undertake analysis, design, coding and testing activities of complex modules
  • Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
  • Participate in code reviews and provide mentorship to junior developers.
  • Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them.
  • Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
  • Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
  • Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
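
As an illustration of the OAuth2-based third-party integration work described above, here is a generic client-credentials sketch using the requests library; the token URL, credentials, and API endpoint are placeholders rather than Deltek services:

```python
# Illustrative OAuth2 client-credentials flow with the requests library.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"    # placeholder
API_URL = "https://api.example.com/v1/projects"        # placeholder


def get_access_token(client_id: str, client_secret: str) -> str:
    # Exchange client credentials for a bearer token.
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]


def list_projects(token: str) -> list:
    # Call the protected API with the bearer token.
    response = requests.get(
        API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=10
    )
    response.raise_for_status()
    return response.json()
```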

Qualifications :

  • A college degree in Computer Science, Software Engineering, Information Science or a related field is required 
  • Minimum 8-10 years of experience. Sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
  • Experience in backend development and Apache Airflow (or equivalent framework).
  • Build APIs and optimize SQL queries with performance considerations.
  • Experience with Agile Development
  • Experience in writing and maintaining unit tests and using testing frameworks is desirable
  • Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
  • Strong desire to continually improve knowledge and skills through personal development activities, and to apply them to continuous software improvement.
  • The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
  • Strong problem-solving and debugging skills.
  • Ability to work in an Agile environment and collaborate with cross-functional teams.
  • Familiarity with version control systems like Git.
  • Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
3 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹9L - ₹10L / yr
skill iconPython
skill iconDjango
FastAPI
Microservices
Large Language Models (LLM)
+12 more

About Us:

MyOperator and Heyo are India’s leading conversational platforms empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.


Role Overview:

We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.


Key Responsibilities:

  • Develop robust backend services using Python, Django, and FastAPI
  • Design and maintain scalable microservices architecture
  • Integrate LangChain/LLMs into AI-powered features
  • Write clean, tested, and maintainable code with pytest
  • Manage and optimize databases (MySQL/Postgres)
  • Deploy and monitor services on AWS
  • Collaborate across teams to define APIs, data flows, and system architecture
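
To give a flavour of the stack named above, here is a minimal FastAPI sketch with a stubbed summarisation step standing in for a LangChain/LLM call; the service name, route, and fields are illustrative only. With pytest and FastAPI's TestClient, an endpoint like this can be unit-tested without running a server.

```python
# Illustrative only: a tiny FastAPI microservice endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="conversation-insights")  # hypothetical service name


class Transcript(BaseModel):
    call_id: str
    text: str


def summarise(text: str) -> str:
    # Placeholder for an LLM/LangChain call in the real feature.
    return text[:140]


@app.post("/v1/summaries")
def create_summary(transcript: Transcript) -> dict:
    # Return a short summary keyed by the call identifier.
    return {"call_id": transcript.call_id, "summary": summarise(transcript.text)}
```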


Must-Have Skills:

  • Python and Django
  • MySQL or Postgres
  • Microservices architecture
  • AWS (EC2, RDS, Lambda, etc.)
  • Unit testing using pytest
  • LangChain or Large Language Models (LLM)
  • Strong grasp of Data Structures & Algorithms
  • AI coding assistant tools (e.g., ChatGPT and Gemini)


Good to Have:

  • MongoDB or ElasticSearch
  • Go or PHP
  • FastAPI
  • React, Bootstrap (basic frontend support)
  • ETL pipelines, Jenkins, Terraform


Why Join Us?

  • 100% Remote role with a collaborative team
  • Work on AI-first, high-scale SaaS products
  • Drive real impact in a fast-growing tech company
  • Ownership and growth from day one
Read more
Robylon AI

at Robylon AI

2 candid answers
Listings Robylon
Posted by Listings Robylon
Bengaluru (Bangalore)
0 - 2 yrs
₹5L - ₹6L / yr
skill iconPython
Generative AI
Prompt engineering

Role Overview

This is a 20% technical, 80% non-technical role designed for individuals who can blend technical know-how with strong operational and communication skills. You’ll be the bridge between our product and the client’s operations team.


Key Responsibilities


  • Collaborate with clients to co-design SOPs for resolving support queries across channels (chat, ticket, voice)
  • Scope and plan each integration: gather technical and operational requirements and convert them into an executable timeline with measurable success metrics (e.g., coverage %, accuracy, CSAT)
  • Lead integration rollouts and post-launch success loops: monitor performance, debug issues, fine-tune prompts and workflows
  • Conduct quarterly “AI health-checks” and continuously improve system effectiveness
  • Troubleshoot production issues, replicate bugs, ship patches, and write clear root-cause analyses (RCAs)
  • Act as the customer’s voice internally, channel key insights to product and engineering teams


Must-Have Qualifications


  • Engineering degree is a must; Computer Science preferred
  • Past experience in coding and a sound understanding of APIs is preferred
  • Ability to communicate clearly with both technical and non-technical stakeholders
  • Experience working in SaaS, customer success, implementation, or operations roles
  • Analytical mindset with the ability to make data-driven decisions



Read more
DAITA

at DAITA

5 candid answers
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
3 - 7 yrs
Upto ₹70L / yr (Varies)
skill iconNodeJS (Node.js)
skill iconPython
skill iconJava
skill iconRuby on Rails (ROR)
skill iconGo Programming (Golang)
+13 more

Who We Are

DAITA is a German AI start-up. We’re transforming the fashion supply chain with AI-powered agents that automate the mundane, freeing teams to focus on creativity, strategy, and growth.

After a successful research phase spanning 8 countries across 3 continents—gathering insights from Indian cotton fields to German retailers—we’ve secured pre-seed funding and key industry partnerships.

Now, we’re building our MVP to deliver speed, precision, and ease to one of the world’s biggest industries.

We’re set on hypergrowth, aiming to redefine textiles with intelligent, scalable tech—and this is your chance to join the ground floor of something huge.


What You’ll Do

As our Chief Engineer, you’ll lead the technical charge to make our vision real, starting with our MVP in a 3–5 month sprint. You’ll:

  • Design and code an AI-driven/agent system (leveraging machine learning and NLP) with integrated workflow automation to streamline and automate tasks in the textile supply chain, owning it from scratch to finish.
  • Develop backend systems, utilize cutting-edge tools, critically assess manpower needs beyond yourself, oversee a small support team, and drive toward our aggressive launch timeline.
  • Collaborate closely with our founders to align tech with ambitious goals and client input, ensuring automated workflows deliver speed, precision, and ease to textile industry stakeholders.
  • Build an MVP that scales to millions, integrating APIs and data pipelines, using major cloud platforms (AWS, Azure, Google Cloud)—keeping us nimble now and primed for explosive growth later.


What You Bring

  • 2–5 years of experience at high-growth startups or leading tech firms—where you shipped real products, solved complex problems, and moved fast.
  • End-to-end ownership: You've taken tech projects from zero to one—built systems from scratch, made architecture decisions, handled messy edge cases, and delivered under pressure.
  • Team Leadership: 1–3 years leading engineering teams, ideally including recruitment and delivery in India.
  • Technical horsepower: AI Agent Experience, strong across full-stack or backend engineering, ML/NLP integration, cloud architecture, and API/data pipeline development. Experience with workflow automation tools and platforms (e.g., Apache Airflow, UiPath, or similar) to automate processes, ideally in supply chain or textiles. You can code an MVP solo if needed.
  • Resource Clarity: Bring as much technical expertise as possible to build our MVP, and if you can’t own every piece, clearly identify the specific areas where you’ll need team members to deliver on time.
  • Vision Alignment: You think like a builder, taking ownership of the product and team as if it were your own, while partnering closely with the founders to execute their vision with trust and decisiveness.
  • Execution DNA: You ship fast, iterate intelligently, and know when to be scrappy vs. when to be solid.
  • Problem-First Thinking: You’re obsessed with solving real user problems, understanding stakeholder needs beyond just writing beautiful code.
  • High-Energy Leadership: Hands-on, humble, and always ready to jump into the trenches. You lead by doing.
  • Geographical Fit: India-based, ideally with previous exposure to international teams or founders.
  • Values-driven: You live our culture—live in the future, move fast, one team, and character above all.


Why Join Us?

  • Be the technical linchpin of a hypergrowth startup—build the MVP that launches us into the stratosphere.
  • Competitive salary and equity options to negotiate—own a piece of something massive.
  • On-site in our Tiruppur (Tamil Nadu) office for 2 months to sync with the German founders, then remote flexibility long-term.
  • A full-time role demanding full availability—put in the time needed to smash deadlines and reshape the second-biggest industry on Earth with a team that moves fast and rewards hustle.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹26L / yr
skill iconPython
PySpark
skill iconDjango
skill iconFlask
RESTful APIs
+3 more

Job title - Python developer

Exp – 4 to 6 years

Location – Pune / Mumbai / Bengaluru

Please find the JD below:

Requirements:

  • Proven experience as a Python Developer
  • Strong knowledge of core Python and PySpark concepts
  • Experience with web frameworks such as Django or Flask
  • Good exposure to any cloud platform (GCP Preferred)
  • CI/CD exposure required
  • Solid understanding of RESTful APIs and how to build them
  • Experience working with databases like Oracle DB and MySQL
  • Ability to write efficient SQL queries and optimize database performance
  • Strong problem-solving skills and attention to detail
  • Strong SQL programming (stored procedures, functions)
  • Excellent communication and interpersonal skills

Roles and Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark
  • Work closely with data scientists and analysts to provide them with clean, structured data.
  • Optimize data storage and retrieval for performance and scalability.
  • Collaborate with cross-functional teams to gather data requirements.
  • Ensure data quality and integrity through data validation and cleansing processes.
  • Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
  • Stay up to date with industry best practices and emerging technologies in data engineering.
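
For illustration, here is a small PySpark ETL sketch in the spirit of these responsibilities; the paths, columns, and business rules are placeholders, not an actual pipeline:

```python
# Illustrative PySpark extract-transform-load snippet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV data (placeholder path).
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: cast types, filter, and aggregate daily revenue.
daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned Parquet for analysts to query (placeholder path).
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/daily_revenue/"
)
```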


Read more
HaystackAnalytics
Careers Hr
Posted by Careers Hr
Navi Mumbai
1 - 4 yrs
₹6L - ₹12L / yr
skill iconRust
skill iconPython
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconData Science
+2 more

Position – Python Developer

Location – Navi Mumbai


Who are we

Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.


Objectives of this Role:

  • Design and implement efficient, scalable backend services using Python.
  • Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions.
  • Build APIs, services, and scripts to support data processing pipelines and front-end applications.
  • Automate recurring tasks and ensure robust integration with cloud services.
  • Maintain high standards of software quality and performance using clean coding principles and testing practices.
  • Collaborate within the team to upskill and unblock each other for faster and better outcomes.





Primary Skills – Python Development

  • Proficient in Python 3 and its ecosystem
  • Frameworks: Flask / Django / FastAPI
  • RESTful API development
  • Understanding of OOPs and SOLID design principles
  • Asynchronous programming (asyncio, aiohttp)
  • Experience with task queues (Celery, RQ)
  • Rust programming experience for systems-level or performance-critical components
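
A short, illustrative example of the asynchronous style listed above (asyncio with aiohttp); the URLs are placeholders:

```python
# Illustrative only: fetch several JSON payloads concurrently with asyncio + aiohttp.
import asyncio

import aiohttp

URLS = [
    "https://example.com/api/samples/1",
    "https://example.com/api/samples/2",
]


async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    # One GET request; raise on HTTP errors so failures surface early.
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
        resp.raise_for_status()
        return await resp.json()


async def main() -> None:
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*(fetch_json(session, u) for u in URLS))
        print(f"fetched {len(results)} payloads concurrently")


if __name__ == "__main__":
    asyncio.run(main())
```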

Testing & Automation

  • Unit Testing: PyTest / unittest
  • Automation tools: Ansible / Terraform (good to have)
  • CI/CD pipelines

DevOps & Cloud

  • Docker, Kubernetes (basic knowledge expected)
  • Cloud platforms: AWS / Azure / GCP
  • GIT and GitOps workflows
  • Familiarity with containerized deployment & serverless architecture

Bonus Skills

  • Data handling libraries: Pandas / NumPy
  • Experience with scripting: Bash / PowerShell
  • Functional programming concepts
  • Familiarity with front-end integration (REST API usage, JSON handling)

 Other Skills

  • Innovation and thought leadership
  • Interest in learning new tools, languages, workflows
  • Strong communication and collaboration skills
  • Basic understanding of UI/UX principles


To know more about us: https://haystackanalytics.in




Read more
Wissen Technology

at Wissen Technology

4 recruiters
Poornima Varadarajan
Posted by Poornima Varadarajan
Mumbai
1 - 8 yrs
₹8L - ₹20L / yr
Object Oriented Programming (OOPs)
Data Structures
Algorithms
skill iconPython

Experience in Python (backend only), data structures, OOP, algorithms, Django, NumPy, etc.

• Good understanding of writing unit tests using PyTest.

• Good understanding of parsing XMLs and handling files using Python.

• Good understanding with Databases/SQL, procedures and query tuning.

• Service Design Concepts, OO and Functional Development concepts.

• Agile Development Methodologies.

• Strong oral and written communication skills.

• Excellent interpersonal skills and a professional approach.
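
A small illustrative example combining two of the skills above, parsing XML with the standard library and unit-testing it with PyTest; the XML layout is a made-up sample:

```python
# Illustrative only: an XML-parsing helper and a PyTest unit test for it.
import xml.etree.ElementTree as ET


def parse_trades(xml_text: str) -> list[dict]:
    """Extract trade records from an XML document."""
    root = ET.fromstring(xml_text)
    return [
        {"symbol": t.get("symbol"), "qty": int(t.get("qty"))}
        for t in root.findall("trade")
    ]


def test_parse_trades():
    sample = '<trades><trade symbol="INFY" qty="10"/><trade symbol="TCS" qty="5"/></trades>'
    trades = parse_trades(sample)
    assert trades == [{"symbol": "INFY", "qty": 10}, {"symbol": "TCS", "qty": 5}]
```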


Read more
IT Company


Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore), Hyderabad, Pune
4 - 8 yrs
₹20L - ₹25L / yr
Big Data
skill iconAmazon Web Services (AWS)
IaaS
Platform as a Service (PaaS)
VMS
+8 more

Job description

 

Job Title: Cloud Migration Consultant – (AWS to Azure)

 


Experience: 4+ years in application assessment and migration

 

About the Role

 

We’re looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You’ll also support application modernization efforts in Azure, with exposure to AWS as needed.

 

Key Responsibilities

 

  • Assess application readiness and document architecture, dependencies, and migration strategy.
  • Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, Cloudockit, and PowerShell.
  • Create architecture diagrams and migration playbooks, and maintain Azure DevOps boards.
  • Set up applications both on-premises and in cloud environments (primarily Azure).
  •  Support proof-of-concepts (PoCs) and advise on migration options.
  •  Collaborate with application, database, and infrastructure teams to enable smooth transition to migration factory teams.
  •  Track progress, blockers, and risks, reporting timely status to project leadership.


Required Skills

 

  • 4+ years of experience in cloud migration and assessment
  •  Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.)
  •  Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3)
  • Experience with Java (Spring Boot)/C#, .NET/Python, Angular/React.js, REST APIs
  • Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps
  •  Network infrastructure understanding (VNets, NSGs, Firewalls, WAFs)
  •  IAM knowledge: OAuth, SAML, Okta/SiteMinder
  •  Experience with Big Data tools like Databricks, Hadoop, Oracle, DocumentDB


Preferred Qualifications

 

  • Azure or AWS certifications
  •  Prior experience with enterprise cloud migrations (especially in Microsoft ecosystem)
  •  Excellent communication and stakeholder management skills


Educational qualification:

 

B.E/B.Tech/MCA

 

Experience :

 

4+ Years

 



Read more
Techno Comp
shravan c
Posted by shravan c
Pune
6 - 8 yrs
₹5L - ₹9L / yr
ADF
Azure Data Factory
skill iconPython
Databricks


Job Title: Developer

Work Location: Pune, MH

Skills Required: Azure Data Factory

Experience Range in Required Skills: 6-8 Years

Job Description: Azure, ADF, Databricks, Python

Essential Skills: Azure, ADF, Databricks, Python

Desirable Skills: Azure, ADF, Databricks, Python

Read more
LearnTube.ai

at LearnTube.ai

2 candid answers
Vidhi Solanki
Posted by Vidhi Solanki
Mumbai
2 - 5 yrs
₹8L - ₹18L / yr
skill iconPython
FastAPI
skill iconAmazon Web Services (AWS)
skill iconMongoDB
CI/CD
+5 more

Role Overview:


As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.


What You'll Do:


At LearnTube, we’re pushing the boundaries of Generative AI to revolutionize how the world learns. As a Backend Engineer, your roles and responsibilities will include:

  • Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95).
  • Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
  • Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
  • Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
  • Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
  • Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend.
  • Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
  • Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.


What makes you a great fit?


Must-Haves:

  • 2+ yrs Python back-end experience (FastAPI)
  • Strong with Docker & container orchestration
  • Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
  • SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals
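
As a hedged sketch of the AWS/SQS experience called for above, here is a simple worker that drains a queue of learning events; the queue URL and event handling are placeholders, not LearnTube's actual services:

```python
# Illustrative only: a long-polling SQS consumer built with boto3.
import json

import boto3

QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/learning-events"  # placeholder

sqs = boto3.client("sqs")


def handle_event(event: dict) -> None:
    # Placeholder: score a quiz answer, update learner progress, etc.
    print("processing", event.get("type"))


def poll_forever() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            handle_event(json.loads(msg["Body"]))
            # Delete only after successful processing so failures are retried.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```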

Nice-to-Haves

  • k8s at scale, Terraform,
  • Experience with AI/ML inference services (LLMs, vector DBs)
  • Go / Rust for high-perf services
  • Observability: Prometheus, Grafana, OpenTelemetry

About Us: 


At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:

  • AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
  • Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.

Meet the Founders: 


LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.


Why Work With Us? 


At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:

  • Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
  • Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
  • Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
  • Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
  • Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
  • Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.


Read more
Ignite Solutions

at Ignite Solutions

6 recruiters
Eman Khan
Posted by Eman Khan
Remote only
5 - 10 yrs
Best in industry
skill iconPython
skill iconFlask
skill iconDjango
skill iconAmazon Web Services (AWS)
Windows Azure
+5 more

We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.


What will you work on?

  • Interface with clients
  • Recommend tech stacks
  • Define end-to-end logical and cloud-native architectures
  • Define APIs
  • Integrate with 3rd party systems
  • Create architectural solution prototypes
  • Hands-on coding, team lead, code reviews, and problem-solving


What Makes You A Great Fit?

  • 5+ years of software experience
  • Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
  • Solid expertise and hands-on experience in Python with Flask or Django
  • Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
  • Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
  • Knowledge of DevOps practices
  • Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
  • Excellent communication skills, verbal and written

About Us

We offer CTO-as-a-service and Product Development for Startups. We value our employees and provide them an intellectually stimulating environment where everyone’s ideas and contributions are valued.
Read more
Tecblic Private LImited
Ahmedabad
4 - 5 yrs
₹8L - ₹12L / yr
Microsoft Windows Azure
SQL
skill iconPython
PySpark
ETL
+2 more

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀


Job description

🔍 Job Title: Data Engineer

📍 Location: Ahmedabad

🚀 Work Mode: On-Site Opportunity

📅 Experience: 4+ Years

🕒 Employment Type: Full-Time

⏱️ Availability : Immediate Joiner Preferred


Join Our Team as a Data Engineer

We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.

As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.


Your Key Responsibilities

  • Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.
  • Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.
  • Implement data validation, transformation, and quality monitoring processes.
  • Collaborate with cross-functional teams to deliver impactful, data-driven solutions.
  • Proactively identify bottlenecks and optimize existing workflows and processes.
  • Provide guidance and mentorship to junior engineers in the team.


Skills & Expertise We’re Looking For

  • 3+ years of hands-on experience in Data Engineering or related roles.
  • Strong expertise in Python and data pipeline design.
  • Experience working with Big Data tools like Hadoop, Spark, Hive.
  • Proficiency with SQL, NoSQL databases, and data warehousing solutions.
  • Solid experience in cloud platforms, particularly Azure.
  • Familiar with distributed computing, data modeling, and performance tuning.
  • Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.
  • Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.


Qualifications

Bachelor’s degree in Computer Science, Data Science, or a related field.

Read more
Coimbatore
1 - 6 yrs
₹3.4L - ₹6.5L / yr
skill iconJavascript
skill iconHTML/CSS
skill iconPython
skill iconMongoDB

We are seeking a dedicated and skilled Full Stack Web Development Trainer to deliver high-quality, hands-on training to students and professionals. The ideal candidate will be passionate about teaching and capable of training learners in both frontend and backend technologies while also contributing to live development projects.

Read more
IT Company


Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
6 - 9 yrs
₹15L - ₹25L / yr
skill iconPython
Tableau
skill iconData Analytics
Google Cloud Platform (GCP)
PowerBI
+2 more

Job Overview:


  • JD of DATA ANALYST:



  • Strong proficiency in Python programming.
  • Preferred knowledge of cloud technologies, especially in Google Cloud Platform (GCP).
  • Experience with visualization tools such as Grafana, PowerBI, and Tableau.
  • Good to have knowledge of AI/ML models.
  • Must have extensive knowledge in Python analytics, particularly in exploratory data analysis (EDA).
Read more
IT Company


Agency job
via Jobdost by Saida Jabbar
Pune
3 - 6 yrs
₹14L - ₹20L / yr
skill iconData Science
skill iconMachine Learning (ML)
skill iconPython
Scikit-Learn
XGBoost
+6 more

Job Overview

  • Level 1: Previous working experience as a Data Scientist, minimum 5 years
  • Level 2: Previous working experience as a Data Scientist for 3 to 5 years
  • In-depth knowledge of Agile process and principles
  • Outstanding communication, presentation, and leadership skills
  • Excellent organizational and time management skills
  • Sharp analytical and problem-solving skills
  • Creative thinker with a vision
  • Flexibility / capacity of adaptation
  • Presentation skills (project reviews with customers and top management)
  • Interest in industrial & automotive topics
  • Fluent in English
  • Ability to work in international teams
  • Engineering degree with a strong background in mathematics and computer science. A PhD in a quantitative field and/or a minimum of 3 years of experience in machine learning is a plus.
  • Excellent understanding of traditional machine learning techniques and algorithms, such as k-NN, SVM, Random Forests, etc.
  • Understanding of deep learning techniques
  • Understanding and, ideally, experience with Reinforcement Learning methods
  • Experience using ML/DL frameworks (Scikit-learn, XGBoost, TensorFlow, Keras, MXNet, etc.)
  • Proficiency in at least one programming language (preferably Python)
  • Experience with SQL and NoSQL databases
  • Excellent verbal and written skills in English is mandatory

Appreciated extra skills:

  • Experience in signal and image processing
  • Experience in forecasting and time series modeling
  • Experience with computer vision libraries like OpenCV
  • Experience using cloud platforms
  • Experience with version control systems (git)
  • Interest in IoT and hardware adapted to ML tasks


Read more
IT Company


Agency job
via Jobdost by Saida Jabbar
Pune
5 - 8 yrs
₹18L - ₹20L / yr
skill iconMachine Learning (ML)
skill iconDeep Learning
Computer Vision
Artificial Intelligence (AI)
skill iconPython
+9 more

Job Overview

  • Minimum 5 years of experience with development in Computer Vision, Machine Learning, Deep Learning and associated implementation of algorithms
  • Knowledge and experience in:
    • Data Science / Data Analysis techniques
    • Hands-on experience of programming in Python, R, and MATLAB or Octave
    • Python frameworks for AI such as TensorFlow, PySpark, Theano, etc., and libraries like PyTorch, Pandas, NumPy, etc.
    • Algorithms such as Regression, SVM, Decision Tree, KNN and Neural Networks

Skills & Attributes:

  • Fast learner and problem solver
  • Innovative thinking
  • Excellent communication skills
  • Integrity, accountability and transparency
  • International working mindset


Read more
IT Company


Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
3 - 6 yrs
₹18L - ₹20L / yr
PySpark
skill iconData Science
skill iconPython
NumPy
Generative AI
+8 more

Job Overview : Data scientist (AI/ML)


  • 3 to 6 years of experience in AI/ML
  • Programming languages: Python, SQL, NoSQL
  • Frameworks: Spark (PySpark), Scikit-learn, SciPy, NumPy, NLTK
  • DL frameworks: TensorFlow, PyTorch, LLMs (Transformers, DeepSeek, LLaMA), Hugging Face, LLM deployment and inference
  • Gen AI framework: LangChain
  • Cloud: AWS
  • Tools: Tableau, Grafana

 


  • LLM, GenAI, OCR (optical character recognition)
  • Notice Period: Immediate to 15 days
Read more
InvestPulse

at InvestPulse

2 candid answers
1 product
Invest Pulse
Posted by Invest Pulse
Remote only
2 - 5 yrs
₹3L - ₹6L / yr
skill iconPython
LangChain
CrewAI
skill iconReact.js
skill iconPostgreSQL
+5 more

LendFlow is an AI-powered home loan assessment platform that helps mortgage brokers and lenders save hours by automating document analysis, income validation, and serviceability assessment. We turn complex financial documents into clear insights—fast.

We’re building a smart assistant that ingests client docs (bank statements, payslips, loan summaries) and uses modular AI agents to extract, classify, and summarize financial data in minutes, not hours. Think OCR + AI agents + compliance-ready outputs.


🛠️ What You’ll Be Building

As part of our early technical team, you’ll help us develop and launch our MVP. Key modules include:

  • Document ingestion and OCR processing (Textract, Document AI)
  • AI agent workflows using LangChain or CrewAI
  • Serviceability calculators with business rule engines
  • React + Next.js frontend for brokers and analysts
  • FastAPI backend with PostgreSQL
  • Security, encryption, audit logging (privacy-first design)
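
As a rough sketch of the document-ingestion and OCR module mentioned above, assuming Amazon Textract via boto3; the bucket and key are placeholders, and multi-page PDFs would go through Textract's asynchronous APIs instead:

```python
# Illustrative only: extract text lines from a single-page document in S3 with Textract.
import boto3

textract = boto3.client("textract")


def extract_lines(bucket: str, key: str) -> list[str]:
    """Return the text lines detected in a document stored in S3."""
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return [
        block["Text"]
        for block in response["Blocks"]
        if block["BlockType"] == "LINE"
    ]


# Example usage (placeholder bucket/key):
# lines = extract_lines("client-docs", "payslips/march.png")
```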


🎯 We’re Looking For:

Must-Have Skills:

  • Strong experience with Python (FastAPI, OCR, LLMs, prompt engineering)
  • Familiarity with AI agent frameworks (LangChain, CrewAI, Autogen, or similar)
  • Frontend skills in React.js / Next.js
  • Experience with PostgreSQL and cloud storage (AWS/GCP)
  • Understanding of financial documents and data privacy best practices

Bonus Points:

  • Experience with OCR tools like Amazon Textract, Tesseract, or Document AI
  • Building ML/NLP pipelines in real-world apps
  • Prior work in fintech, lending, or proptech sectors


Read more