
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

Wama Technology

Posted by Ariba Khan
Thane, Navi Mumbai
4 - 6 yrs
Up to ₹18L / yr (varies)
Python
PostgreSQL
MongoDB
Artificial Intelligence (AI)
Docker

About the Role:

We are building cutting-edge AI products designed for enterprise-scale applications and are looking for a Senior Python Developer to join our core engineering team. You will be responsible for designing and delivering robust, scalable backend systems that power our advanced AI solutions.


Key Responsibilities:

  • Design, develop, and maintain scalable Python-based backend applications and services.
  • Collaborate with AI/ML teams to integrate machine learning models into production environments.
  • Optimize applications for performance, reliability, and security.
  • Write clean, maintainable, and testable code following best practices.
  • Work with cross-functional teams including Data Science, DevOps, and UI/UX to ensure seamless delivery.
  • Participate in code reviews, architecture discussions, and technical decision-making.
  • Troubleshoot, debug, and upgrade existing systems.


Required Skills & Experience:

  • Minimum 5 years of professional Python development experience.
  • Strong expertise in Django / Flask / FastAPI.
  • Hands-on experience with REST APIs, microservices, and event-driven architecture.
  • Solid understanding of databases (PostgreSQL, MySQL, MongoDB, Redis).
  • Familiarity with cloud platforms (AWS / Azure / GCP) and CI/CD pipelines.
  • Experience with AI/ML pipeline integration is a strong plus.
  • Strong problem-solving and debugging skills.
  • Excellent communication skills and ability to work in a collaborative environment.


Good to Have:

  • Experience with Docker, Kubernetes.
  • Exposure to message brokers (RabbitMQ, Kafka).
  • Knowledge of data engineering tools (Airflow, Spark).
  • Familiarity with Neo4j or other graph databases.
Sigmoid

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3 - 5 yrs
Up to ₹25L / yr (varies)
PySpark
SQL
Python
Windows Azure
Amazon Web Services (AWS)
+2 more

You will be responsible for building a highly-scalable and extensible robust application. This position reports to the Engineering Manager.


Responsibilities:

  • Align Sigmoid with key Client initiatives
  • Interface daily with customers across leading Fortune 500 companies to understand strategic requirements
  • Ability to understand business requirements and tie them to technology solutions
  • Open to work from client location as per the demand of the project / customer.
  • Facilitate in Technical Aspects
  • Develop and evolve highly scalable and fault-tolerant distributed components using Java technologies.
  • Excellent experience in Application development and support, integration development and quality assurance.
  • Provide technical leadership and manage it on a day-to-day basis
  • Stay up-to-date on the latest technology to ensure the greatest ROI for customer & Sigmoid
  • Hands-on coder with a good understanding of enterprise-level code.
  • Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems
  • Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a Parallel Processing environment
  • Culture:
  • Must be a strategic thinker with the ability to think unconventionally / out-of-the-box.
  • Analytical and solution-driven orientation.
  • Raw intellect, talent and energy are critical.
  • Entrepreneurial and agile: understands the demands of a private, high-growth company.
  • Ability to be both a leader and a hands-on "doer".

 

Qualifications:

  • A 3-5 year track record of relevant work experience and a degree in Computer Science or a related technical discipline is required.
  • Experience in development of Enterprise scale applications and capable in developing framework, design patterns etc. Should be able to understand and tackle technical challenges, and propose comprehensive solutions.
  • Experience with functional and object-oriented programming; Java (preferred) or Python is a must.
  • Hands-on knowledge of MapReduce, Hadoop, PySpark, HBase, and Elasticsearch (a short PySpark sketch follows this list).
  • Development and support experience in Big Data domain
  • Experience with database modelling and development, data mining and warehousing.
  • Unit, Integration and User Acceptance Testing.
  • Effective communication skills (both written and verbal)
  • Ability to collaborate with a diverse set of engineers, data scientists and product managers
  • Comfort in a fast-paced start-up environment.
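
Since the qualifications above call out hands-on PySpark and big-data work, here is a minimal, illustrative sketch of the kind of batch aggregation job that experience implies. It assumes PySpark is installed; the input path and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Build (or reuse) a Spark session for a small batch aggregation job
spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Hypothetical input: an orders CSV with order_date and amount columns
df = spark.read.csv("s3://example-bucket/orders.csv", header=True, inferSchema=True)

daily = (
    df.groupBy("order_date")
      .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Write partition-friendly Parquet output for downstream analytics
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_orders/")
```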

 

Preferred Qualification:

  • Experience in Agile methodology.
  • Proficient with SQL and its variation among popular databases.
  • Experience working with large, complex data sets from a variety of sources.
Intineri infosol Pvt Ltd

Posted by Shivani Pandey
Remote only
4 - 10 yrs
₹5L - ₹15L / yr
Python
2D Geometry Concept
3D Geometry Concept
NumPy
SciPy
+9 more

Job Title: Python Developer

Experience Level: 4+ years

 

Job Summary:

We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.
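
As a rough illustration of the 2D geometry work mentioned above, here is a tiny sketch using Shapely (one of the libraries listed below); the polygon coordinates are made up.

```python
from shapely.geometry import Point, Polygon

# Hypothetical 4 x 3 rectangular footprint
footprint = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])

print(footprint.area)                       # 12.0
print(footprint.contains(Point(1.0, 1.0)))  # True: point lies inside the polygon
print(footprint.buffer(0.5).area)           # area after offsetting the boundary by 0.5 units
```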


Key Responsibilities:

·       Design, develop, and maintain robust and scalable APIs using Python.

·       Work with geometric data structures and algorithms (2D/3D).

·       Collaborate with cross-functional teams including front-end developers, designers, and product managers.

·       Optimize code for performance and scalability.

·       Write unit and integration tests to ensure code quality.

·       Participate in code reviews and contribute to best practices.

 

Required Skills:

·       Strong proficiency in Python.

·       Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).

·       Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.

·       Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh.

·       Experience with version control systems (e.g., Git).

·       Strong problem-solving and analytical skills.

 

Good to Have:

·       Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).

·       Knowledge of mathematical modeling or simulation.

·       Exposure to cloud platforms (AWS, Azure, GCP).

·       Familiarity with CI/CD pipelines.

 

Education:

·       Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.

Vola Finance

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Up to ₹23L / yr (varies)
Python
Logistic regression
SQL
Credit Risk
Amazon Web Services (AWS)
+3 more

Domain - Credit risk / Fintech 

    

Roles and Responsibilities:

1. Development, validation and monitoring of Application and Behaviour scorecards for the retail loan portfolio (a short example follows this list)

2. Improvement of collection efficiency through advanced analytics

3. Development and deployment of a fraud scorecard

4. Upsell / cross-sell strategy implementation using analytics

5. Create modern data pipelines and processing using AWS PaaS components (Glue, SageMaker Studio, etc.)

6. Deploying software using CI/CD tools such as Azure DevOps, Jenkins, etc.

7. Experience with API tools such as REST, Swagger, and Postman

8. Model deployment in AWS and management of the production environment

9. Team player who can work with cross-functional teams to gather data and derive insights
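
As a rough illustration of responsibility 1, the sketch below fits a logistic-regression, scorecard-style model with scikit-learn; the file name and column names are placeholders.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical application data with a binary "default" outcome column
df = pd.read_csv("applications.csv")
X, y = df.drop(columns=["default"]), df["default"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, probs)
print("AUC:", auc, "Gini:", 2 * auc - 1)  # Gini = 2*AUC - 1, a common scorecard metric
```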


Mandatory Technical skill set : 

1. Previous experience in scorecard development and credit risk strategy development 

2. Python and Jenkins

3. Logistic regression, Scorecard, ML and neural networks 

4. Statistical analysis and A/B testing

5. AWS SageMaker, S3, EC2, Docker

6. REST API, Swagger and Postman

7. Excel

8. SQL 

9. Visualisation tools such as Redash / Grafana 

10. Bitbucket, GitHub, and other versioning tools

Albert Invent

Posted by Nikita Sinha
Hyderabad
2 - 4 yrs
Up to ₹16L / yr (varies)
Automation
Terraform
Python
Node.js
Amazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Key Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Collaborate with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Design and deliver the mission-critical stack, focusing on security, resiliency, scale, and performance.
  • Take ownership of end-to-end performance and operability.
  • Apply strong knowledge of automation and orchestration principles.
  • Serve as the ultimate escalation point for complex or critical issues not yet documented as Standard Operating Procedures (SOPs).
  • Troubleshoot and define mitigations using a deep understanding of service topology and dependencies.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 2+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience in Infrastructure as Code (IAC), preferably using Terraform.
  • Proficiency in Python or Node.js, with experience designing RESTful APIs and working in microservices architecture.
  • Solid expertise in AWS cloud infrastructure and platform technologies including APIs, distributed systems, and microservices.
  • Hands-on experience with observability stacks, including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools (e.g., CircleCI) and performance testing tools like K6.
  • Passion for bringing automation and standardization to engineering operations.
  • Ability to build high-performance APIs with low latency (<200ms).
  • Ability to work in a fast-paced environment, learning from peers and leaders.
  • Demonstrated ability to mentor other engineers and contribute to team growth, including participation in recruiting activities.

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, or Datadog.
  • Experience building Internal Developer Platforms (IDPs) or reusable frameworks for engineering teams.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (e.g., SOC2, HIPAA).


Tecblic Private Limited
Posted by Priya Khatri
Ahmedabad
2 - 3 yrs
₹1L - ₹4L / yr
Python
Django
FastAPI
Large Language Models (LLM)
Natural Language Processing (NLP)
+9 more

Job Profile : Python Developer

Job Location : Ahmedabad, Gujarat - On site

Job Type : Full time

Experience - 1-3 Years

 

Key Responsibilities:

  •  Design, develop, and maintain Python-based applications and services.
  •  Collaborate with cross-functional teams to define, design, and ship new features.
  •  Write clean, maintainable, and efficient code following best practices.
  •  Optimize applications for maximum speed and scalability.
  •  Troubleshoot, debug, and upgrade existing systems.
  •  Integrate user-facing elements with server-side logic.
  •  Implement security and data protection measures.
  •  Work with databases (SQL/NoSQL) and integrate data storage solutions.
  •  Participate in code reviews to ensure code quality and share knowledge with the team.
  •  Stay up-to-date with emerging technologies and industry trends.


 Requirements:

  •  1-3 years of professional experience in Python development.
  •  Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
  •  Experience with RESTful APIs and web services.
  •  Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
  •  Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
  •  Experience with version control systems (e.g., Git).
  •  Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
  •  Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have
  •  Strong problem-solving skills and attention to detail.
  •  Excellent communication and teamwork skills.


 Good to Have:

  •  Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
  •  Knowledge of asynchronous programming and event-driven architecture.
  •  Familiarity with CI/CD pipelines and DevOps practices.
  •  Experience with microservices architecture.
  •  Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
  •  Hands-on experience in RAG and LLM model integration would be a plus.
CybeSigma Consulting Services

Posted by Anto Alexander
4th Floor, Majestic Signia, 405, Plot No. A-27, Block A, Industrial Area, Sector 62, Noida, Uttar Pradesh 201309
3 - 5 yrs
₹4L - ₹6L / yr
Bitcoin
Ethereum
Truffle (Ethereum framework)
hardhat
Python
+8 more

About the Role

We are seeking a skilled Blockchain Developer with proven experience in crypto wallet engine development and integration. The ideal candidate will design, build, and maintain secure blockchain-based applications, focusing on wallet infrastructure, smart contracts, and decentralized transaction flows.
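
For context, address generation and a balance lookup of the kind described below might look like this with web3.py (v6) and eth-account; the RPC endpoint is a placeholder, and this is only a sketch, not the team's actual wallet engine.

```python
from eth_account import Account
from web3 import Web3

# Placeholder RPC endpoint; a real wallet engine would use its own node or provider
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

acct = Account.create()             # new key pair (address generation)
print("new address:", acct.address)

# Balance lookup for the freshly generated address (0 on a real chain)
balance_wei = w3.eth.get_balance(acct.address)
print("balance (ETH):", w3.from_wei(balance_wei, "ether"))
```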

Key Responsibilities

• Design, develop, and maintain crypto wallet engines supporting multi-currency storage, transactions, and key management.

• Integrate blockchain nodes and APIs for popular networks (e.g., Bitcoin, Ethereum, Polygon, Solana, Binance Smart Chain).

• Implement secure wallet functionalities including address generation, transaction signing, and UTXO management.

• Develop smart contracts and DApps using Solidity, Rust, or other blockchain languages.

• Work on on-chain/off-chain data synchronization, encryption mechanisms, and transaction validation.

• Optimize system performance and ensure security compliance for all wallet-related operations.

• Collaborate with backend, frontend, and DevOps teams to deploy blockchain applications at scale.

• Conduct code reviews, implement automated testing, and maintain robust documentation.

Technical Skills

• Blockchain Platforms: Ethereum, Bitcoin, Polygon, Solana, Binance Smart Chain

• Languages: Solidity, Go, Python, Node.js, Rust

• Tools & Frameworks: Truffle, Hardhat, Remix, Web3.js, Ethers.js

• Crypto Wallets: HD Wallets, Multi-Signature Wallets, MPC Wallets

• Databases: MongoDB, PostgreSQL, Redis

• Version Control: Git, GitHub, GitLab

• Security: Encryption standards, HSM, Key Management, Transaction Signing

• APIs & Integrations: RESTful APIs, GraphQL, WebSocket, Blockchain Node APIs

Slooze

Posted by Hari Krishna
Remote, Coimbatore
0 - 4 yrs
₹2L - ₹12L / yr
Python
GraphQL
crawlers
data pipeline
Integration

About the Role

We’re looking for a Data Engineer who can turn messy, unstructured information into clean, usable insights. You’ll be building crawlers, integrating APIs, and setting up data flows that power our analytics and AI layers. If you love data plumbing as much as data puzzles — this role is for you.
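
As a rough sketch of the crawler-plus-API work described above, assuming the requests library; the endpoints and field names are made up.

```python
import requests

BASE = "https://example.com"  # hypothetical target site / API

# REST-style pagination
page = requests.get(f"{BASE}/api/products", params={"page": 1}, timeout=10)
page.raise_for_status()
for item in page.json().get("results", []):
    print(item.get("name"), item.get("price"))

# Equivalent GraphQL query against a hypothetical endpoint
query = "{ products(first: 5) { name price } }"
gql = requests.post(f"{BASE}/graphql", json={"query": query}, timeout=10)
gql.raise_for_status()
print(gql.json())
```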

 

Responsibilities

- Build and maintain Python-based data pipelines, crawlers, and integrations with 3rd-party APIs.

- Perform brute-force analytics and exploratory data work on crawled datasets to surface trends and anomalies.

- Develop and maintain ETL workflows — from raw ingestion to clean, structured outputs.

- Collaborate with product and ML teams to make data discoverable, queryable, and actionable.

- Optimize data collection for performance, reliability, and scalability.

 

Requirements

- Strong proficiency in Python and Jupyter notebooks.

- Experience building web crawlers / scrapers and integrating with REST / GraphQL APIs.

- Solid understanding of data structures and algorithms (DSA).

- Comfort with quick, hands-on analytics — slicing and validating data directly.

 

Good to Have

- Experience with schema design and database modeling.

- Exposure to both SQL and NoSQL databases; familiarity with vector databases is a plus.

- Knowledge of data orchestration tools (Dagster preferred).

- Understanding of data lifecycle management — from raw to enriched layers.

 

Why Join Us

We’re not offering employment — we’re offering ownership.

If you’re here for a job, this isn’t your place.

We’re building the data spine of a new-age supply chain intelligence platform — and we need people who can crush constraints, move fast, and make impossible things work.

You’ll have room to think, build, break, and reinvent — not follow.

If you thrive in chaos and create clarity, you’ll fit right in.

 

Screening Challenge

 

Before we schedule a call, we have an exciting challenge for you. Please go through the link below and submit your solution to us with the Subject line: SUB: [Role] [Full Name]

 

Link - https://dub.sh/slooze-takehome

Synorus
Posted by Synorus Admin
Remote only
0 - 1 yrs
₹0.2L - ₹1L / yr
Google colab
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Python
PyTorch
+3 more

About Synorus

Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.

If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.


Role Overview

We are seeking passionate AI/LLM Engineering Interns who can:

  • Fine-tune LLMs for legal domain use-cases
  • Train and experiment with open-source foundation models
  • Work with large datasets efficiently
  • Build RAG pipelines and text-processing frameworks
  • Run model training workflows on Google Colab / Kaggle / Cloud GPUs

This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.
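
As a flavour of the fine-tuning work involved, here is a minimal LoRA setup sketch using Hugging Face Transformers and PEFT; the base model name is a placeholder and the training loop itself is omitted.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "your-org/base-model"  # placeholder; e.g. a small open Llama/Mistral checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections typically adapted
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```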

Key Responsibilities

  • Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
  • Build and preprocess legal datasets at scale
  • Develop efficient inference & training pipelines
  • Evaluate models for accuracy, hallucinations, and trustworthiness
  • Implement RAG architectures (vector DBs + embeddings)
  • Work with GPU environments (Colab/Kaggle/Cloud)
  • Contribute to model improvements, prompt engineering & safety tuning

Must-Have Skills

  • Strong knowledge of Python & PyTorch
  • Understanding of LLMs, Transformers, Tokenization
  • Hands-on experience with HuggingFace Transformers
  • Familiarity with LoRA/QLoRA, PEFT training
  • Data wrangling: Pandas, NumPy, tokenizers
  • Ability to handle multi-GB datasets efficiently

Bonus Skills

(Not mandatory — but a strong plus)

  • Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
  • Familiarity with vLLM, llama.cpp, GGUF
  • Worked on summarization, Q&A or document-AI projects
  • Knowledge of legal texts (Indian laws/case-law/statutes)
  • Open-source contributions or research work

What You Will Gain

  • Real-world training on LLM fine-tuning & legal AI
  • Exposure to production-grade AI pipelines
  • Direct mentorship from engineering leadership
  • Research + industry project portfolio
  • Letter of experience + potential full-time offer

Ideal Candidate

  • You experiment with models on weekends
  • You love pushing GPUs to their limits
  • You prefer research + implementation over theory alone
  • You want to build AI that matters — not just demos


Location - Remote

Stipend - 5K - 10K

NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹11L / yr
Python
Node.js
React.js
OpenAI
LLM API

Job Title: Software Engineer

Location: Bengaluru 

Experience: 1-3 Years 

Working Days: 5 Days

About the Role:

We are reimagining how enterprises in BFSI and healthcare interact with documents and workflows through AI-first platforms. Our focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and Human-in-the-Loop (HITL) automation to transform credit decisioning, underwriting, and compliance operations.

Role Overview:

As a Software Engineer, you’ll design and build next-generation AI systems and scalable backend platforms. You’ll collaborate with ML, product, and data teams to develop LLM-powered microservices, APIs, and document intelligence tools that process unstructured data (PDFs, images, HTML) securely and efficiently.
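
As a rough sketch of the kind of LLM-powered document processing described above, assuming the openai>=1.x SDK with OPENAI_API_KEY set; the model name, prompt, and fields are illustrative only.

```python
from openai import OpenAI

client = OpenAI()

doc_text = "..."  # text already extracted from a PDF upstream (OCR/extraction omitted)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Extract borrower_name and loan_amount as JSON."},
        {"role": "user", "content": doc_text},
    ],
)
print(resp.choices[0].message.content)
```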

Key Responsibilities:

  • Design, develop, and optimize scalable backend systems and APIs for AI and document workflows.
  • Collaborate with cross-functional teams to integrate LLM agents and deploy GenAI-based microservices.
  • Build tools to structure unstructured data for downstream decisioning logic.
  • Ensure security, performance, and reliability of systems handling sensitive financial and healthcare data.
  • Take ownership of modules end-to-end—from concept to production rollout and monitoring.

Tech Stack:

  • Languages: Python, TypeScript, JavaScript
  • Frameworks: Node.js, React.js, LangChain
  • AI & ML Tools: OpenAI APIs, OCR (Tesseract, AWS Textract), Pandas, spaCy, FinBERT, LLMs (GPT, Claude)
  • Infra & DevOps: AWS, GCP, Docker, Kubernetes, PostgreSQL, Redis, GitHub Actions, Datadog, Grafana

You’ll Excel If You Have:

  • 1–3 years of experience in backend or full-stack development
  • Experience with unstructured data, PDFs, or document-heavy systems
  • Exposure to GenAI/LLM APIs (OpenAI, Claude, etc.)
  • Strong product mindset and ability to ship scalable, usable features in fast-paced environments


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Gurugram, Hyderabad, Kochi (Cochin), Noida, Thiruvananthapuram
6 - 9 yrs
₹10L - ₹25L / yr
Python
Angular (2+)
AngularJS (1.x)
Java
Amazon Web Services (AWS)
+9 more

Job Description

Java or Python, Angular Front end, basic AWS (current project)

 

Nice to Haves

Experience with other modern JavaScript frameworks like Vue.js.

Exposure to containerization and orchestration technologies such as Docker and Kubernetes.

Knowledge of CI/CD best practices and tools (e.g., Jenkins, GitLab CI, CircleCI).

Familiarity with cloud-native architecture and microservices design.

Strong problem-solving skills and a collaborative mindset.

Experience working in fast-paced, dynamic startup or product-driven environments.

 

Must-Haves

Minimum of 5+ years of professional experience in the below skills:

Java or Python, Angular front end, basic AWS (these skills should be visible in the candidate's current project)

Notice period - 0 to 15 days only

Work locations: Bangalore, Chennai, Gurgaon, Hyderabad, Kochi, Noida, Thiruvananthapuram, India


(Virtual interview mode)

Interview Date: 15th Nov

 

Additional Guidelines

Interview process: 2 technical rounds + 1 client round

3 days in office, Hybrid model.

Daten & Wissen Pvt Ltd
Mumbai, Bhayander, Thane
1 - 2 yrs
₹2L - ₹4L / yr
Django
RESTful APIs
DRF
Python
Amazon Web Services (AWS)
+1 more

About the Role:


We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.
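
As one small example of the automation workflows mentioned above (and the Celery requirement listed below), a task-queue sketch might look like this; the broker URL and task body are placeholders.

```python
from celery import Celery

# Placeholder broker: a local Redis instance on the default port
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def send_welcome_email(user_id: int) -> str:
    # Real email-sending logic would go here
    return f"queued welcome email for user {user_id}"

# From a Django/DRF view you would enqueue it with:
#   send_welcome_email.delay(42)
```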


Key Responsibilities:

  • Develop and maintain Python-based web applications using Django and Django Rest Framework.
  • Build and integrate RESTful APIs.
  • Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
  • Contribute to improving development workflows through automation.
  • Assist in deploying applications using cloud platforms like Heroku or AWS.
  • Write clean, maintainable, and efficient code.


Requirements:

Backend:

  • Strong understanding of Django and Django Rest Framework (DRF).
  • Experience with task queues like Celery.


Frontend (Basic Understanding):

  • Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.


Hosting & Deployment:

  • Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.

Linux/Server Knowledge:

  • Basic to intermediate understanding of Linux commands and server environments.
  • Ability to work with terminal, virtual environments, SSH, and basic server configurations.

Python Knowledge:

  • Good grasp of OOP concepts.
  • Familiarity with Pandas for data manipulation is a plus.

Soft & Team Skills:

  • Strong collaboration and team management abilities.
  • Ability to work in a team-driven environment and coordinate tasks smoothly.
  • Problem-solving mindset and attention to detail.
  • Good communication skills and eagerness to learn

What We Offer:

  • A collaborative, friendly, and growth-focused work environment.
  • Opportunity to work on real-time projects using modern technologies.
  • Guidance and mentorship to help you advance in your career.
  • Flexible and supportive work culture.
  • Opportunities for continuous learning and skill development.


Pune
3 - 7 yrs
₹7L - ₹10L / yr
Python
Google Cloud Platform (GCP)
MongoDB
gRPC
RabbitMQ
+3 more

Advanced Backend Development: Design, build, and maintain efficient, reusable, and reliable Python code. Develop complex backend services using FastAPI, MongoDB, and Postgres.
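
To make the FastAPI side concrete, a minimal endpoint sketch could look like the following; the model and route are illustrative, not part of the actual codebase.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
async def create_item(item: Item) -> dict:
    # Persistence (MongoDB/Postgres) would happen here in a real service
    return {"ok": True, "item": item}
```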

Microservices Architecture Design: Lead the design and implementation of a scalable microservices architecture, ensuring systems are robust and reliable.

Database Management and Optimization: Oversee and optimize the performance of MongoDB and Postgres databases, ensuring data integrity and security.

Message Broker Implementation: Implement and manage sophisticated message broker systems like RabbitMQ or Kafka for asynchronous processing and inter-service communication.
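
For the message-broker piece, a bare-bones RabbitMQ publisher sketch with pika might look like this, assuming a local broker; the queue name and payload are placeholders.

```python
import pika

# Connect to a local RabbitMQ broker (placeholder host)
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(queue="tasks", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=b'{"job": "resize-image", "id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```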

Git and Version Control Expertise: Utilize Git for sophisticated source code management. Lead code reviews and maintain high standards in code quality.

Project and Team Management: Manage backend development projects, coordinating with cross-functional teams. Mentor junior developers and contribute to team growth and skill development.

Cloud Infrastructure Management: Extensive work with cloud services, specifically Google Cloud Platform (GCP), for deployment, scaling, and management of applications.

Performance Tuning and Optimization: Focus on optimizing applications for maximum speed, efficiency, and scalability.

Unit Testing and Quality Assurance: Develop and maintain thorough unit tests for all developed code. Lead initiatives in test-driven development (TDD) to ensure code quality and reliability.

 Security Best Practices: Implement and advocate for security best practices, data protection protocols, and compliance standards across all backend services.

AI Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹15L - ₹46L / yr
Data Science
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI
Deep Learning
+13 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • Minimum of 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows


Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.


Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.


Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Gaming Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 11 yrs
₹20L - ₹65L / yr
Python
Go Programming (Golang)
HTML/CSS
Mobile App Development
PHP
+9 more

REVIEW CRITERIA:

  • Strong Senior Lead Engineer Profile
  • Minimum 8+ years of total software engineering experience (Preferably in Native Mobile Development)
  • Minimum 3+ years of backend development experience (any language like Python, PHP, Golang, Ruby etc.)
  • Minimum 1+ years of frontend development experience (any framework like JavaScript, React, Angular, etc.)
  • Minimum 1+ years in a technical leadership role, leading and mentoring engineers (team size of more than 5)
  • Strong programming fundamentals with practical coding experience, code reviews, TDD, and clean code principles


PREFERRED:

  • Product companies
  • Experience in Unity and C#


JOB SPECIFIC CRITERIA:

  • CV Attachment is mandatory
  • Over a 6–9-month period, you'll immerse yourself in game development, Unity, and C# to become a well-rounded technical leader in the gaming space. Are you okay with learning game development?
  • Are you open to work for 6 days in WFO mode?
  • Are you open to timings: 10:00 am to 8:00 pm (Monday to Friday), Saturday 12:00 pm to 6:00 pm?
  • Are you okay for hands-on coding (this role requires 30% coding and 70% team management)?
  • What is the team size you have managed?


ROLE & RESPONSIBILITIES:

You'll work closely with our team to implement best practices, improve our architecture, and create a high-performance engineering culture. Over a 6–9-month period, you'll also immerse yourself in game development, Unity, and C# to become a well-rounded technical leader in the gaming space.


  • Drive maximum development velocity through direct involvement in development sprints, ensuring developers work as efficiently and effectively as possible.
  • Lead and mentor a team of engineers, fostering a culture of technical excellence and continuous improvement.
  • Drive architectural decisions that ensure scalable, maintainable, and high-performance game products.
  • Foster a high-performance engineering culture aligned with ambitious goals, accountability, and proactive problem-solving.
  • Implement and enforce engineering best practices (e.g., code reviews, testing, documentation) and the adoption of new tools, technologies including AI, and methodologies to optimize team efficiency.
  • Transition our team to a high-performance culture aligned with our ambitious, venture-backed goals.


IDEAL CANDIDATE:

  • 8+ years of software engineering experience with at least 3+ years in a technical leadership role
  • Ability to reasonably estimate and plan tasks and features.
  • Strong programming fundamentals and hands-on coding abilities
  • Strong grasp of software architecture, TDD, code reviews, and clean coding principles.
  • Proficient at profiling games to identify bottlenecks and performance issues.
  • Experience building complex, scalable software systems
  • Proven track record of driving architectural decisions and technical excellence
  • Experience mentoring and developing engineering talent
  • Strong problem-solving skills and attention to detail
  • Excellent communication skills and ability to explain complex technical concepts
  • Experience with agile development methodologies
  • Bachelor's degree in computer science, Engineering, or equivalent practical experience
Palcode.ai

Posted by Team Palcode
Remote only
1 - 2 yrs
₹3L - ₹4L / yr
Python
FastAPI
Flask

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works "magic" on pre-construction workflows, cutting them from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.



Why Palcode.ai


Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data

High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday

Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions

Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment

Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions

Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software


Your Role:

  • Design and build our core AI services and APIs using Python
  • Create reliable, scalable backend systems that handle complex data
  • Help set up cloud infrastructure and deployment pipelines
  • Collaborate with our AI team to integrate machine learning models
  • Write clean, tested, production-ready code


You'll fit right in if:

  • You have 1 year of hands-on Python development experience
  • You're comfortable with full-stack development and cloud services
  • You write clean, maintainable code and follow good engineering practices
  • You're curious about AI/ML and eager to learn new technologies
  • You enjoy fast-paced startup environments and take ownership of your work


How we will set you up for success

  • You will work closely with the Founding team to understand what we are building.
  • You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
  • You will be involved in a monthly one-on-one with the founders to discuss feedback
  • A unique opportunity to learn from the best - we are Gold partners of AWS, Razorpay, and Microsoft Startup programs, having access to rich talent to discuss and brainstorm ideas.
  • You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.


Location: Bangalore, Remote


Compensation: Competitive salary + Meaningful equity


If you get excited about solving hard problems that have real-world impact, we should talk.


All the best!!

Remote only
0 - 3 yrs
₹0.2 - ₹1.2 / mo
PyTorch
TensorFlow
OpenCV
FFmpeg
Deep Learning
+11 more

About the Role

We are looking for a passionate AI Engineer Intern (B.Tech, M.Tech / M.S. or equivalent) with strong foundations in Artificial Intelligence, Computer Vision, and Deep Learning to join our R&D team.

You will help us build and train realistic face-swap and deepfake video models, powering the next generation of AI-driven video synthesis technology.

This is a remote, individual-contributor role offering exposure to cutting-edge AI model development in a startup-like environment.


Key Responsibilities

  • Research, implement, and fine-tune face-swap / deepfake architectures (e.g., FaceSwap, SimSwap, DeepFaceLab, LatentSync, Wav2Lip).
  • Train and optimize models for realistic facial reenactment and temporal consistency.
  • Work with GANs, VAEs, and diffusion models for video synthesis.
  • Handle dataset creation, cleaning, and augmentation for face-video tasks.
  • Collaborate with the AI core team to deploy trained models in production environments.
  • Maintain clean, modular, and reproducible pipelines using Git and experiment-tracking tools.

Required Qualifications

  • B.Tech, M.Tech / M.S. (or equivalent) in AI / ML / Computer Vision / Deep Learning.
  • Certifications in AI or Deep Learning (DeepLearning.AI, NVIDIA DLI, Coursera, etc.).
  • Proficiency in PyTorch or TensorFlow, OpenCV, FFmpeg.
  • Understanding of CNNs, Autoencoders, GANs, Diffusion Models.
  • Familiarity with datasets like CelebA, VoxCeleb, FFHQ, DFDC, etc.
  • Good grasp of data preprocessing, model evaluation, and performance tuning.

Preferred Skills

  • Prior hands-on experience with face-swap or lip-sync frameworks.
  • Exposure to 3D morphable models, NeRF, motion transfer, or facial landmark tracking.
  • Knowledge of multi-GPU training and model optimization.
  • Familiarity with Rust / Python backend integration for inference pipelines.

What We Offer

  • Work directly on production-grade AI video synthesis systems.
  • Remote-first, flexible working hours.
  • Mentorship from senior AI researchers and engineers.
  • Opportunity to transition into a full-time role upon outstanding performance.


Location: Remote | Stipend: ₹10,000/month | Duration: 3–6 months

Remote only
5 - 20 yrs
₹12L - ₹25L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Generative AI
Large Language Models (LLM)

We are building an AI-powered chatbot platform and looking for an AI/ML Engineer with strong backend skills as our first technical hire. You will be responsible for developing the core chatbot engine using LLMs, creating backend APIs, and building scalable RAG pipelines.

You should be comfortable working independently, shipping fast, and turning ideas into real product features. This role is ideal for someone who loves building with modern AI tools and wants to be part of a fast-growing product from day one.
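
To illustrate the RAG pipeline piece mentioned above, here is a tiny retrieval sketch using Chroma's default embedding function (one of the vector stores named below); the documents and query are placeholders, and the LLM call that would consume the retrieved context is omitted.

```python
import chromadb

client = chromadb.Client()  # in-memory instance for the sketch
docs = client.create_collection("kb")

docs.add(
    ids=["d1", "d2"],
    documents=[
        "Refunds are processed within 5 business days.",
        "Shipping takes 3-7 days depending on location.",
    ],
)

hits = docs.query(query_texts=["how long do refunds take?"], n_results=1)
context = hits["documents"][0][0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: how long do refunds take?"
print(prompt)  # this prompt would then be sent to the chosen LLM
```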

Responsibilities

• Build the core AI chatbot engine using LLMs (OpenAI, Claude, Gemini, Llama etc.)

• Develop backend services and APIs using Python (FastAPI/Flask)

• Create RAG pipelines using vector databases (Pinecone, FAISS, Chroma)

• Implement embeddings, prompt flows, and conversation logic

• Integrate chatbot with web apps, WhatsApp, CRMs and 3rd-party APIs

• Ensure system reliability, performance, and scalability

• Work directly with the founder in shaping the product and roadmap

Requirements

• Strong experience with LLMs & Generative AI

• Excellent Python skills with FastAPI/Flask

• Hands-on experience with LangChain or RAG architectures

• Vector database experience (Pinecone/FAISS/Chroma)

• Strong understanding of REST APIs and backend development

• Ability to work independently, experiment fast, and deliver clean code

Nice to Have

• Experience with cloud (AWS/GCP)

• Node.js knowledge

• LangGraph, LlamaIndex

• MLOps or deployment experience


Growing AdTech start-up delivering innovative solutions

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹15L - ₹40L / yr
DevOps
Amazon Web Services (AWS)
Amazon EC2
AWS RDS
Amazon VPC
+20 more

Review Criteria

  • Strong Senior/Lead DevOps Engineer Profile
  • 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
  • Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
  • Solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
  • Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
  • Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
  • Experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
  • Good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
  • Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
  • It's an individual contributor (IC) role

 

Preferred

  • Must be proficient in scripting languages (Bash, Python) for automation and operational tasks.
  • Must have strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
  • Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Are you okay to come F2F for HM Interview round?
  • Reason for Change?
  • Provide CTC Breakup?
  • Please provide Career Summary / Skills?
  • How many years of experience you have in AWS?


Role & Responsibilities

We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
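
Since Python scripting for AWS automation is called out for this role, a small illustrative boto3 snippet is below; the region and filters are placeholders and credentials are assumed to be configured.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

# List running instances: a typical building block for automation/reporting scripts
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])
```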

 

Key Responsibilities:

Cloud & Infrastructure as Code (IaC)-

  • Architect and manage AWS environments ensuring scalability, security, and high availability.
  • Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
  • Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.

CI/CD & Automation:

  • Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
  • Automate deployments, provisioning, and monitoring across environments.

Containers & Orchestration:

  • Deploy and operate workloads on Docker and Kubernetes (EKS).
  • Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
  • Optimize performance of containerized and microservices applications.

Monitoring & Reliability:

  • Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Establish logging, alerting, and proactive monitoring for high availability.

Security & Compliance:

  • Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
  • Manage WAF, Guard Duty, Inspector, and other AWS-native security tools.
  • Configure VPNs, firewalls, and secure access policies and AWS organizations.

Databases & Analytics:

  • Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Manage data reliability, performance tuning, and cloud-native integrations.
  • Experience with Apache Airflow and Spark.


Ideal Candidate

  • 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
  • Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening etc).
  • Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
  • Proven ability with CI/CD pipeline automation and DevSecOps practices.
  • Practical experience with VPC Peering, Transit Gateway, WAF, Guard Duty, Inspector and advanced AWS networking and security tools.
  • Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Strong scripting skills (Shell/bash, Python, or similar) for automation.
  • Bachelor / Master’s degree
  • Effective communication skills


Clink

Posted by Hari Krishna
Bengaluru (Bangalore), Hyderabad
2 - 4 yrs
₹8L - ₹12L / yr
Database Design
Systems design
Relational Database (RDBMS)
Python
Ruby
+2 more

Role Overview:

We’re looking for an exceptionally passionate, logical, and smart Backend Developer to join our core tech team. This role goes beyond writing code — you’ll help shape the architecture, lead the entire backend team if needed, and be deeply involved in designing scalable systems almost from scratch.


This is a high-impact opportunity to work directly with the founders and play a pivotal role in building the core product. If you’re looking to grow alongside a fast-growing startup, take complete ownership, and influence the direction of the technology and product, this role is made for you.


Why Clink?

Clink is a fast-growing product startup building innovative solutions in the food-tech space. We’re on a mission to revolutionize how restaurants connect with customers and manage offers seamlessly. Our platform is a customer-facing app that needs to scale rapidly as we grow. We also aim to leverage Generative AI to enhance user experiences and drive personalization at scale.


Key Responsibilities:

  • Design, develop, and completely own high-performance backend systems.
  • Architect scalable, secure, and efficient system designs.
  • Own database schema design and optimization for performance and reliability.
  • Collaborate closely with frontend engineers, product managers, and designers.
  • Guide and mentor junior team members.
  • Explore and experiment with Generative AI capabilities for product innovation.
  • Participate in code reviews and ensure high engineering standards.

Must-Have Skills:

  • 2–4 years of experience in backend development at a product-based company.
  • Strong expertise in database design and system architecture.
  • Hands-on experience building multiple production-grade applications.
  • Solid programming fundamentals and logical problem-solving skills.
  • Experience with Python or Ruby on Rails (one is mandatory).
  • Experience integrating third-party APIs and services.

Good-to-Have Skills:

  • Familiarity with Generative AI tools, APIs, or projects.
  • Contributions to open-source projects or personal side projects.
  • Exposure to frontend basics (React, Vue, or similar) is a plus.
  • Exposure to containerization, cloud deployment, or CI/CD pipelines.

What We’re Looking For:

  • Extremely high aptitude and ability to solve tough technical problems.
  • Passion for building products from scratch and shipping fast.
  • A hacker mindset — someone who builds cool stuff even in their spare time.
  • Team player who can lead when required and work independently when needed.


Oneture Technologies

Posted by Eman Khan
Mumbai
5 - 8 yrs
₹15L - ₹23L / yr
Python
FastAPI
Django
React.js
Amazon Web Services (AWS)

About The Role

We are seeking a Full Stack Cloud Engineer with strong hands-on experience in Python (FastAPI), React.js, and AWS Serverless architecture to lead and contribute to the design and development of scalable, modern web applications. The ideal candidate will bring both technical depth and leadership skills, mentoring a small team of developers while remaining actively involved in coding, architectural decisions, and deployment.


You will play a key role in building and optimizing cloud-native, serverless applications using AWS services, integrating front-end and back-end components, and ensuring reliability, scalability, and performance.
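
As a sketch of the FastAPI-on-Lambda pattern implied above, assuming the Mangum adapter; the route and names are illustrative.

```python
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

# Lambda entry point, e.g. referenced as "app.handler" in an AWS SAM template
handler = Mangum(app)
```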

 

Responsibilities


Technical Leadership

  • Lead and mentor a small team of engineers, ensuring adherence to coding standards and best practices.
  • Drive architectural and design decisions aligned with scalability, performance, and maintainability.
  • Conduct code reviews, guide junior developers, and foster a collaborative engineering culture.

Backend Development

  • Design, build, and maintain RESTful APIs using FastAPI or Flask.
  • Develop and deploy serverless microservices on AWS Lambda using AWS SAM.
  • Work with relational databases (PostgreSQL/MySQL) and optimize SQL queries.
  • Manage asynchronous task queues with Celery and Redis/SQS.

Frontend Development

  • Build and maintain responsive, scalable front-end applications using React.js.
  • Implement reusable components using Redux, Hooks, and TypeScript.
  • Integrate APIs and optimize front-end performance, accessibility, and security.

AWS Cloud & DevOps

  • Architect and deploy applications using AWS SAM, Lambda, Glue, Cognito, AppSync, and Amplify.
  • (Good to have) Experience designing and consuming GraphQL APIs via AWS AppSync.
  • Implement CI/CD pipelines and manage deployments via Amplify, CodePipeline, or equivalent.
  • Ensure proper authentication, authorization, and identity management with Cognito.
  • Use GitLab/DevOps tooling, Docker, and AWS ECS/EKS for containerized deployments where required.


Preferred Skills

  • Experience with GraphQL (AppSync) and data integrations.
  • Exposure to container orchestration (ECS/EKS).
  • AWS Certification (e.g., AWS Developer or Architect Associate) is a plus.


Soft Skills

  • Strong communication and leadership abilities.
  • Ability to mentor and motivate team members.
  • Problem-solving mindset with attention to detail and scalability.
  • Passion for continuous learning and improvement.


About Oneture Technologies

 

Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of Digital Technologies and Data to drive transformations and turning ideas into business realities.

 

Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions - from ideation, project inception, planning through deployment to ongoing support and maintenance.

 

Our core competencies and technical expertise include cloud-powered Product Engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners and our “Startups-like agility with Enterprises-like maturity” philosophy have helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.

Knowmax
Posted by Bhawna Attri
Gurugram
2 - 5 yrs
₹5L - ₹8L / yr
Selenium
Appium
Java
Python
CI/CD
+2 more

Job Summary

We seek a motivated and detail-oriented QA Automation Tester to join our QA team. You will be responsible for writing and executing automated test cases, identifying bugs, and supporting the overall testing efforts to ensure software quality. This role offers an excellent opportunity to grow in test automation while working closely with experienced QA professionals and development teams.
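
As a flavour of the automated test scripts mentioned above, a minimal Selenium sketch is shown below (Selenium 4+, Chrome); the page URL and element IDs are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium Manager resolves the driver binary
try:
    driver.get("https://example.com/login")          # hypothetical page under test
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title               # simple post-login check
finally:
    driver.quit()
```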


Key Responsibilities

  • Understand functional and non-functional requirements to design relevant test cases.
  • Develop, execute, and maintain automated test scripts under the guidance of senior QA team members.
  • Use automation tools such as Selenium, Appium, or similar frameworks.
  • Identify, document, and track defects, and work with developers to ensure timely resolution.
  • Assist in creating and managing test data and environments.
  • Collaborate with cross-functional teams, including developers, product managers, and senior QA
  • Participate in Agile ceremonies such as stand-ups, sprint reviews, and retrospectives.
  • Support manual testing activities as needed.
  • Continuously learn and adapt to new testing tools, technologies, and best practices.
  • Ensure test coverage, consistency, and accuracy across the software lifecycle.


Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 1 to 3 years of experience in QA automation testing.
  • Basic experience with test automation tools like Selenium, Appium, or similar.
  • Familiarity with one or more programming/scripting languages (e.g., Java, Python, or JavaScript).
  • Understanding of QA processes, software testing methodologies, and defect life cycle.
  • Exposure to Agile/Scrum methodologies.
  • Strong analytical and problem-solving skills.
  • Good communication skills and a collaborative mindset.
  • Eagerness to learn and grow in the test automation domain.


Preferred Qualifications (Nice to Have)

  • Hands-on experience with test management or bug tracking tools (e.g., JIRA, TestRail).
  • Exposure to version control systems like Git.
  • Basic understanding of CI/CD concepts and tools like Jenkins.
  • Relevant certifications (e.g., ISTQB Foundation, Selenium WebDriver certification).


enParadigm

Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Upto ₹18L / yr (Varies
)
skill iconNodeJS (Node.js)
skill iconJava
skill iconPython
TypeScript
skill iconReact.js
+1 more

We are looking for a Full Stack Developer with strong experience in TypeScript-based frontend frameworks (Svelte/React/Angular) and at least two backend stacks (FastAPI, Python, PHP, Java). You’ll work across the full development cycle, from designing architecture to deploying scalable applications.


Responsibilities:

  • Collaborate with product managers and engineers to design and build scalable solutions
  • Build robust, responsive front-end applications in TypeScript
  • Develop well-structured back-end services and APIs
  • Manage databases and integrations for performance and security
  • Troubleshoot, debug, and optimize applications
  • Ensure mobile responsiveness and data protection standards
  • Document code and processes clearly


Technical Skills:

  • Proficiency with TypeScript and modern frontend frameworks (Svelte, React, Angular)
  • Hands-on experience with any 2 backend stacks (FastAPI, Python, PHP, Java)
  • Familiarity with databases (PostgreSQL, MySQL, MongoDB) and web servers (Apache)
  • Experience developing APIs and integrating with third-party services


Experience & Education:

  • B.Tech/BE in Computer Science or related field
  • Minimum 2 years of experience as a Full Stack Developer

Soft Skills:


  • Strong problem-solving and analytical skills
  • Clear communication and teamwork abilities
  • Attention to detail and an ownership mindset


Read more
Orbia

at Orbia

4 candid answers
3 recruiters
Bisman Gill
Posted by Bisman Gill
Pune
12yrs+
Upto ₹55L / yr (Varies
)
Windows Azure
Data Structures
skill iconData Analytics
PowerBI
skill iconPython
+1 more

Knowledge & Experience:

  • Providing technical leadership and guidance to teams in data and analytics engineering solutions and platforms
  • Strong problem-solving skills and the ability to translate business requirements into actionable data science solutions. 
  • Excellent communication skills, with the ability to effectively convey complex ideas to technical and non-technical stakeholders. 
  • Strong team player with excellent interpersonal and collaboration skills. 
  • Ability to manage multiple projects simultaneously and deliver high-quality results within specified timelines. 
  • Proven ability to work collaboratively in a global, matrixed environment and engage effectively with global stakeholders across multiple business groups.


Relevant Experience:

  • 12+ years of IT experience in delivering medium-to-large data engineering and analytics solutions
  • Min. 4 years of experience working with Azure Databricks, Azure Data Factory, Azure Data Lake, Azure SQL DW, Azure SQL, Power BI, SAC, and other BI, data visualization, and exploration tools
  • Deep understanding of master data management & governance concepts and methodologies
  • Experience in Data Modelling & Source System Analysis 
  • Familiarity with PySpark 
  • Mastery of SQL 
  • Experience with the Python programming language for data engineering purposes
  • Ability to conduct data profiling, cataloging, and mapping for the design and construction of technical data flows
  • Preferred but not required:
  • Microsoft Certified: Azure Data Engineer Associate 
  • Experience in preparing data for Data Science and Machine Learning 
  • Knowledge of Jupyter Notebooks or Databricks Notebooks for Python development 
  • Power BI Dataset Development and Dax 
  • Power BI Report development 
  • Exposure to AI services in Azure and Agentic Analytics solutions
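
As a small, hedged illustration of the PySpark and SQL skills listed above, the sketch below profiles and aggregates a toy sales dataset with Spark; the column names and values are invented for the example and are not Orbia data.

# Toy PySpark profiling/aggregation example; schema and values are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("profiling-sketch").getOrCreate()

rows = [("IN", "2024-01-05", 120.0), ("IN", "2024-01-06", None), ("DE", "2024-01-05", 80.5)]
df = spark.createDataFrame(rows, ["country", "order_date", "amount"])

# Basic data profiling: null counts per column.
profile = df.select([F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls") for c in df.columns])
profile.show()

# Typical transformation: daily revenue per country, ignoring null amounts.
daily = (df.dropna(subset=["amount"])
           .groupBy("country", "order_date")
           .agg(F.sum("amount").alias("revenue"))
           .orderBy("country", "order_date"))
daily.show()

spark.stop()
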
Read more
Orbia

at Orbia

4 candid answers
3 recruiters
Bisman Gill
Posted by Bisman Gill
Pune
9yrs+
Upto ₹44L / yr (Varies
)
Windows Azure
Data Structures
SACS
PowerBI
skill iconPython
+1 more

Knowledge & Experience:

  • Providing technical leadership and guidance to teams in data and analytics engineering solutions and platforms
  • Strong problem-solving skills and the ability to translate business requirements into actionable data science solutions. 
  • Excellent communication skills, with the ability to effectively convey complex ideas to technical and non-technical stakeholders. 
  • Strong team player with excellent interpersonal and collaboration skills. 
  • Ability to manage multiple projects simultaneously and deliver high-quality results within specified timelines. 
  • Proven ability to work collaboratively in a global, matrixed environment and engage effectively with global stakeholders across multiple business groups.

Relevant Experience:

  • 10 to 12 years of IT experience in delivering medium-to-large data engineering and analytics solutions
  • Min. 4 years of experience working with Azure Databricks, Azure Data Factory, Azure Data Lake, Azure SQL DW, Power BI, SAC, and other BI, data visualization, and exploration tools
  • Experience in Data Modelling & Source System Analysis 
  • Familiarity with PySpark 
  • Mastery of SQL 
  • Experience with the Python programming language for data engineering purposes
  • Ability to conduct data profiling, cataloging, and mapping for technical design and construction of technical data flows
  • Experience in data visualization/exploration tools 

The following will be considered an advantage but are not required: 

  • Microsoft Certified: Azure Data Engineer Associate 
  • Experience in preparing data for Data Science and Machine Learning 
  • Knowledge of Jupyter Notebooks or Databricks Notebooks for Python development 
  • Power BI Dataset Development and Dax 
  • Power BI Report development 
  • Exposure to AI services in Azure and Agentic Analytics solutions
Read more
CGI Inc

at CGI Inc

3 recruiters
Shruthi BT
Posted by Shruthi BT
Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹20L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconPython
skill iconAngular (2+)

Full Stack Engineer


Position Description

Responsibilities

• Take design mockups provided by UX/UI designers and translate them into web pages or applications using HTML and CSS. Ensure that the design is faithfully replicated in the final product.

• Develop enabling frameworks and applications end-to-end, and enhance them with data analytics and AI capabilities

• Ensure effective Design, Development, Validation and Support activities in line with the Customer needs, architectural requirements, and ABB Standards.

• Support ABB business units through consulting engagements.

• Develop and implement machine learning models to solve specific business problems, such as predictive analytics, classification, and recommendation systems

• Perform exploratory data analysis, clean and preprocess data, and identify trends and patterns.

• Evaluate the performance of machine learning models and fine-tune them for optimal results.

• Create informative and visually appealing data visualizations to communicate findings and insights to non-technical stakeholders.

• Conduct statistical analysis, hypothesis testing, and A/B testing to support decision-making processes.

• Define the solution and project plan, identify and allocate team members, and track the project; work with data engineers to integrate, transform, and store data from various sources.

• Collaborate with cross-functional teams, including business analysts, data engineers, and domain experts, to understand business objectives and develop data science solutions.

• Prepare clear and concise reports and documentation to communicate results and methodologies.

• Stay updated with the latest data science and machine learning trends and techniques.

• Familiarity with ML Model Deployment as REST APIs.


Background

• Engineering graduate / Master's degree with rich exposure to Data Science, from a reputed institution

• Create responsive web designs that adapt to different screen sizes and devices using media queries and responsive design techniques.

• Write and maintain JavaScript code to add interactivity and dynamic functionality to web pages. This may include user input handling, form validation, and basic animations.

• Familiarity with front-end JavaScript libraries and frameworks such as React, Angular, or Vue.js. Depending on the projects, you may be responsible for working within these frameworks

• At least 6+ years of experience in AI/ML concepts and Python (preferred); knowledge of deep learning frameworks like PyTorch and TensorFlow is preferred

• Domain knowledge of manufacturing / process industries, physics, and first-principles-based analysis

• Analytical thinking for translating data into meaningful insights that can be consumed by ML models for training and prediction.

• Should be able to deploy Model using Cloud services like Azure Databricks or Azure ML Studio. Familiarity with technologies like Docker, Kubernetes and MLflow is good to have.

• Agile development of customer-centric prototypes or ‘Proof of Concepts’ for focused digital solutions

• Good communication skills; must be able to discuss requirements effectively with client teams and with internal teams.
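
For illustration only, here is a minimal Python sketch of the model-building and evaluation loop described above (train a classifier, evaluate it on held-out data, and report metrics). It uses synthetic data from scikit-learn rather than any real CGI or ABB dataset.

# Minimal train/evaluate loop with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in for a real, preprocessed business dataset.
X, y = make_classification(n_samples=1_000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data; in practice this is followed by tuning, deployment, and monitoring.
print(classification_report(y_test, model.predict(X_test)))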

Read more
Remote only
6 - 15 yrs
₹10L - ₹30L / yr
skill iconNextJs (Next.js)
skill iconFlutter
FastAPI
skill iconAmazon Web Services (AWS)
TypeScript
+8 more

Mission

Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.

Responsibilities

  • Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
  • Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
  • Integrate Stripe, Maps, and analytics; enforce accessibility and performance baselines.
  • Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
  • Partner with Mobile and AI engineers on API/tool schemas and developer experience.

Requirements

  • 6–10+ years; expert TypeScript, strong Python.
  • Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
  • Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
  • Practical CI/CD and observability (logs/metrics/traces).

Nice-to-haves

  • OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.

Key Outcomes (ongoing)

  • Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.


Read more
Beyond Seek Technologies Pvt Ltd
Remote only
4 - 7 yrs
₹14L - ₹25L / yr
Process automation
RPA
skill iconPython
AWS Lambda
AWS Simple Queuing Service (SQS)
+6 more

Responsibilities:

  • Develop and maintain RPA workflows using Selenium, AWS Lambda, and message queues (SQS/RabbitMQ/Kafka).
  • Build and evolve internal automation frameworks, reusable libraries, and CI-integrated test suites to accelerate developer productivity.
  • Develop comprehensive test strategies (unit, integration, end-to-end), optimize performance, handle exceptions, and ensure high reliability of automation scripts.
  • Monitor automation health and maintain dashboards/logging via cloud tools (CloudWatch, ELK, etc.).
  • Champion automation standards workshops, write documentation, and coach other engineers on test-driven development and behavior-driven automation.


Requirements:

  • 4-5 years of experience in automation engineering with deep/hands-on experience in Python and modern browser automation frameworks (Selenium/PythonRPA).
  • Solid background with desktop-automation solutions (UiPath, PythonRPA) and scripting legacy applications.
  • Strong debugging skills, with an eye for edge cases and race conditions in distributed, asynchronous systems.
  • Hands-on experience with AWS services like Lambda, S3 and API Gateway.
  • Familiarity with REST APIs, webhooks, and queue-based async processing.
  • Experience integrating with third-party platforms or enterprise systems.
  • Ability to translate business workflows into technical automation logic.
  • Able to evangelize automation best practices, present complex ideas clearly, and drive cross-team alignment.


Nice to Have:

  • Experience with RPA frameworks (UiPath, BluePrism, etc.).
  • Familiarity with building LLM-based workflows (LangChain, LlamaIndex) or custom agent loops to automate cognitive tasks.
  • Exposure to automotive dealer management or VMS platforms.
  • Understanding of cloud security and IAM practices.
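
As a hedged sketch of the queue-driven automation described above, the Python Lambda handler below consumes SQS messages and routes them to an automation step. The message fields and the process_task helper are hypothetical, and the partial-batch response assumes that batch item failure reporting is enabled on the event source mapping.

# Sketch of an AWS Lambda handler triggered by an SQS event source.
# Field names ("task_type", "payload") and process_task are hypothetical.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def process_task(task_type: str, payload: dict) -> None:
    # Placeholder for the real RPA step (e.g., driving a browser with Selenium).
    logger.info("processing %s with payload keys %s", task_type, list(payload))


def lambda_handler(event, context):
    failures = []
    for record in event.get("Records", []):          # standard SQS event shape
        try:
            body = json.loads(record["body"])
            process_task(body["task_type"], body.get("payload", {}))
        except Exception:
            logger.exception("failed to process message %s", record.get("messageId"))
            failures.append({"itemIdentifier": record.get("messageId")})
    # Partial-batch response so only failed messages are returned to the queue.
    return {"batchItemFailures": failures}
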
Read more
Blutic India Pvt Ltd
Remote only
10 - 15 yrs
₹18L - ₹20L / yr
SQL,
ETL
skill iconPython
REDSHIFT
SNOWFLAKE
+2 more

We are looking for a Senior Data Engineer/Developer with over 10 years of experience to be a key contributor to our data-driven initiatives. This role is 'primarily' focused on development, involving the design and construction of data models, writing complex SQL, developing ETL processes, and contributing to our data architecture. The 'secondary' focus involves applying your deep database knowledge to performance tuning, query optimization, and collaborating on DBA-related support activities in AWS environments (RDS, Redshift, SQL Server, Snowflake). The ideal candidate is a builder who understands how to get the most out of a database platform.

Key Responsibilities

Data Development & Engineering (Primary Focus):

·      Design & Development: Architect, design, and implement efficient, scalable, and sustainable data models and database schemas.

·      Advanced SQL Programming: Write sophisticated, highly-optimized SQL code for complex business logic, data retrieval, and manipulation within MySQL RDS, SQL Server, and AWS Redshift.

·      Data Pipeline & ETL Development: Collaborate with engineering teams to design, build, and maintain robust ETL processes and data pipeline integrations.

·      Automation & Scripting: Utilize Python as a primary tool for scripting, automation, data processing, and enhancing platform capabilities.

·      CI/CD Ownership: Own and enhance CI/CD pipelines for database deployments, schema migrations, and automated testing, ensuring smooth and reliable releases.

·      Solution Collaboration: Collaborate with application engineering teams to deliver scalable, secure, and performing data solutions and APIs.

Database Administration & Optimization (Secondary Focus):

·      Performance Tuning: Proactively identify and resolve performance bottlenecks, including slow-running queries, indexing strategies, and resource contention. Use tools like SQL Sentry for deep diagnostics.

·      Operational Support: Perform essential DBA activities such as supporting backup/recovery strategies, contributing to high-availability designs, and assisting with patch management plans.

·      AWS Data Management: Administer and optimize AWS RDS and Redshift instances, leveraging knowledge of DB Clusters (Read Replicas, Multi-AZ) for development and testing.

·      Monitoring & Reliability: Monitor data platform health using Amazon CloudWatch, xMatters, and other tools to ensure high availability and reliability, tackling issues as they arise.

Architecture & Mentorship:

·      Contribute to architectural decisions and infrastructure modernization efforts on AWS and Snowflake.

·      Provide technical guidance and mentorship to other developers on best practices in database design and SQL.

Required Qualifications & Experience

·      10+ years of experience in a data engineering, database development, or software development role with a heavy focus on data.

·      Expert-level SQL programming skills with extensive experience in MySQL (AWS RDS), Microsoft SQL Server, Redshift, and Snowflake

·      Strong development skills in Python for Lambda / Glue development on AWS

·      Hands-on experience designing and optimizing for AWS Redshift

·      Proven experience in performance tuning and optimization of complex queries and data models.

·      Solid understanding of ETL concepts, processes, and tools.

·      Experience with CI/CD tools (e.g., Bitbucket Pipelines, Jenkins) for automating database deployments.

·      Experience managing production data environments and troubleshooting platform issues.

·      Excellent written and verbal communication skills, with the ability to work effectively in a remote team.
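
As one hedged example of the 'primary focus' development work above (complex SQL plus Python automation), the snippet below runs a window-function deduplication query against a Postgres-compatible warehouse such as Redshift via psycopg2. The connection parameters, schema, table, and column names are placeholders, not real systems.

# Hypothetical ETL step: keep the latest record per customer using ROW_NUMBER().
# Connection parameters and table/column names are placeholders.
import psycopg2

DEDUP_SQL = """
    CREATE TABLE analytics.customers_dedup AS
    SELECT customer_id, email, updated_at
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY updated_at DESC) AS rn
        FROM raw.customers
    ) ranked
    WHERE rn = 1;
"""

with psycopg2.connect(host="warehouse.example.com", port=5439,
                      dbname="analytics", user="etl_user", password="***") as conn:
    with conn.cursor() as cur:
        cur.execute(DEDUP_SQL)      # one transformation step in a larger ETL job
    conn.commit()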

Preferred Skills (Nice-to-Have)

·      Understanding of Data Governance

·      Experience with Snowflake, particularly around architecture, agents, data sharing, security, and performance.

·      Knowledge of infrastructure-as-code (IaC) tools like CloudFormation.

Work Schedule & Conditions

·      This is a 100% remote, long-term opportunity.

·      The standard work week will be Wednesday through Sunday.

·      Your designated days off will be Monday and Tuesday.

·      You must be willing to work partially overlapping hours with Eastern Standard Time (EST) to ensure collaboration with the team and support during core business hours

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
AZURE
skill iconJava
skill iconRuby
Oracle NoSQL Database
+5 more

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 3-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms – GCP/Azure/AWS – is nice to have

Read more
Hashone Careers

at Hashone Careers

2 candid answers
Madhavan I
Posted by Madhavan I
Remote only
6 - 8 yrs
₹6L - ₹8L / yr
skill iconPython

Job Title: Python Backend Developer

Experience: 6+ Years

Location: Remote


Job Summary:

We are looking for an experienced Python Backend Developer to design, build, and maintain scalable, high-performance backend systems. The ideal candidate should have strong expertise in Python frameworks, database design, API development, and cloud-based architectures. You will collaborate closely with front-end developers, DevOps engineers, and product teams to deliver robust, secure, and efficient backend solutions.

Key Responsibilities:

  • Design, develop, and maintain scalable and efficient backend services using Python.
  • Build RESTful and GraphQL APIs for front-end and third-party integrations.
  • Optimize application performance and ensure system reliability and scalability.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Develop and maintain database schemas, stored procedures, and data models.
  • Implement security and data protection best practices.
  • Write clean, maintainable, and well-documented code following coding standards.
  • Conduct code reviews, troubleshoot issues, and provide technical mentorship to junior developers.
  • Integrate applications with cloud services (AWS / Azure / GCP) and CI/CD pipelines.
  • Monitor system performance and handle production deployments.

Required Skills and Qualifications:

  • 6+ years of hands-on experience in backend development using Python.
  • Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
  • Experience in RESTful API and microservices architecture.
  • Proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Redis).
  • Strong understanding of OOP concepts, design patterns, and asynchronous programming.
  • Hands-on experience with Docker, Kubernetes, and CI/CD tools (Jenkins, GitHub Actions, etc.).
  • Experience with cloud platforms (AWS, Azure, or GCP).
  • Proficient in version control systems like Git.
  • Solid understanding of unit testing, integration testing, and test automation frameworks (PyTest, Unittest).
  • Familiarity with message brokers (RabbitMQ, Kafka, Celery) is a plus.
  • Knowledge of containerized deployments and serverless architecture is an advantage.
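
As a small, non-authoritative sketch of the kind of backend service described above, here is a minimal FastAPI application exposing REST endpoints with a Pydantic model. The resource ("orders"), its fields, and the in-memory store are invented purely for illustration.

# Minimal FastAPI service with an in-memory store; resource and fields are invented.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")


class Order(BaseModel):
    id: int
    item: str
    quantity: int = 1


_ORDERS: dict[int, Order] = {}      # stand-in for a real database


@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    _ORDERS[order.id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]

# Run locally with: uvicorn main:app --reload   (assuming this file is saved as main.py)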

Education:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications:

  • Experience working in Agile/Scrum development environments.
  • Exposure to DevOps tools and monitoring systems (Prometheus, Grafana, ELK).
  • Contribution to open-source projects or community participation.

Soft Skills:

  • Excellent problem-solving and analytical skills.
  • Strong communication and teamwork abilities.
  • Proactive attitude with attention to detail.
  • Ability to work independently and mentor junior team members.
Read more
KGISL MICROCOLLEGE
skillryt hr
Posted by skillryt hr
Thrissur, Ernakulam
1 - 5 yrs
₹1L - ₹5L / yr
skill iconPython

Job Title: Freelance Python Trainer (Workshop-Based)

Location: Thrissur & Ernakulam, Kerala

Duration: Few days (short-term workshops)

Type: Freelance / Contract

Compensation: Hourly pay (based on number of workshop hours)

About the Role

We are looking for an enthusiastic and skilled Python Trainer to conduct interactive, hands-on workshops for college students in Thrissur and Ernakulam. The sessions will introduce students to Python programming concepts, practical applications, and project-based learning.

Key Responsibilities

  • Conduct engaging and effective Python training sessions (basic to intermediate level) for college students.
  • Deliver content in a clear and interactive manner with a focus on practical implementation and hands-on exercises.
  • Prepare and/or customize training materials, examples, and small projects relevant to the workshop.
  • Clarify doubts, guide participants during coding sessions, and encourage active participation.
  • Provide feedback to students and summarize key takeaways after each session.

Required Skills & Qualifications

  • Strong proficiency in Python programming (core Python, data structures, libraries such as NumPy/Pandas, or basic automation).
  • Prior experience as a trainer, mentor, or educator (preferred, but not mandatory).
  • Excellent communication and presentation skills.
  • Ability to engage and motivate students with varying levels of programming knowledge.
  • Availability to travel to colleges in Thrissur and Ernakulam for in-person workshops.
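
For example, a beginner-level, hands-on workshop exercise of the kind this role would run might look like the short Pandas snippet below; the student marks data is made up.

# Beginner Pandas exercise: load, filter, and summarise a small dataset.
import pandas as pd

marks = pd.DataFrame({
    "name": ["Anu", "Ben", "Chitra", "Dev"],
    "subject": ["Python", "Python", "Maths", "Python"],
    "score": [78, 62, 91, 85],
})

python_marks = marks[marks["subject"] == "Python"]          # filter rows
print(python_marks.sort_values("score", ascending=False))   # rank students by score
print("Average Python score:", python_marks["score"].mean())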

Compensation

  • Pay: Based on the number of training hours conducted.
  • Additional travel allowance may be provided (if applicable).
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Bengaluru (Bangalore)
3 - 6 yrs
Best in industry
skill iconAmazon Web Services (AWS)
skill iconPython
DevOps
CI/CD

Job Description.

 

1.      Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)

2.      Good grasp of scripting (Linux is a must, i.e. bash/sh/zsh etc.; Windows is nice to have)

3.      Python or Java or JS basic knowledge (Python Preferred)

4.      Monitoring tools

5.      Alerting tools

6.      Logging tools

7.      CICD

8.      Docker/containers/(k8s/terraform nice to have)

9.      Experience working on distributed applications with multiple services

10.  Incident management

11.  DB experience in terms of basic queries

12.  Understanding of performance analysis of applications

13.  Idea about data pipelines would be nice to have

14.  Snowflake querying knowledge: nice to have

 

The person should be able to:

Monitor system issues

Create strategies to detect and address issues

Implement automated systems to troubleshoot and resolve issues.

Write and review post-mortems

Manage infrastructure for multiple product teams

Collaborate with product engineering teams to ensure best practices are being followed

Read more
evoqins
Sethulakshmi Manoj
Posted by Sethulakshmi Manoj
Kochi (Cochin)
2 - 4 yrs
₹3L - ₹7L / yr
skill iconPython
FastAPI
skill iconAmazon Web Services (AWS)
RESTful APIs
SQL
+2 more

Company Description

Evoqins is an end-to-end digital product development team focused on maximizing the scalability and reliability of global businesses. We specialize in a wide range of domains including fintech, banking, e-commerce, supply chain, enterprises, logistics, healthcare, and hospitality. With ISO 9001 certification and a 4.9-star Google rating, we are proud to have 120+ satisfied customers and an 87% customer retention rate. Our services include UX/UI design, mobile app development, web app development, custom software development, and team augmentation. 


Role Description

We are looking for a passionate Senior Backend Developer.  You will be responsible for designing, developing, and maintaining scalable backend services and APIs using Python. 

  • Role: Senior Backend Developer
  • Location: Kochi
  • Employment Type: Full Time

Key Responsibilities

  • Design, develop, and maintain scalable Python-based applications and APIs.
  • Build and optimize backend systems using FastAPI/Django/Flask.
  • Work with PostgreSQL/MySQL databases, ensuring efficiency and reliability.
  • Develop and maintain REST APIs (GraphQL experience is a plus).
  • Collaborate using Git-based version control.
  • Deploy and manage applications on AWS cloud infrastructure.
  • Ensure best practices in performance optimization, testing, and security.

Required Skills & Experience

  • 2–5 years of hands-on Python development experience.
  • Experience in Fintech projects is an advantage
  • Proven experience in FastAPI and REST API development.
  • Strong database skills with PostgreSQL (preferred) and MySQL.
  • Practical exposure to API integrations and third-party services.
  • Experience deploying and maintaining applications in production.
  • Familiarity with AWS cloud services.


Read more
NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
3 - 5 yrs
₹14L - ₹22L / yr
skill iconPython
Artificial Intelligence (AI)
Prompt engineering
skill iconJavascript
Open-source LLMs

We’re partnering with a fast-growing AI-first enterprise transforming how organizations handle documents, decisioning, and workflows — starting with BFSI and healthcare. Their platforms are redefining document intelligence, credit analysis, and underwriting automation using cutting-edge AI and human-in-the-loop systems.


As an AI Engineer, you’ll work with a high-caliber engineering team building next-gen AI systems that:

  • Power robust APIs and platforms used by underwriters, analysts, and financial institutions.
  • Build and integrate GenAI-powered agents.
  • Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.


Key Responsibilities

  • Build and optimize ML/DL models for document understanding, classification, and summarization.
  • Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
  • Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
  • Package and deploy models as REST APIs or microservices in production environments.
  • Collaborate with engineering teams to integrate models into existing products and workflows.
  • Monitor, retrain, and fine-tune models to ensure reliability and performance.
  • Stay updated on emerging AI frameworks, architectures, and open-source tools; propose system improvements.


Required Skills & Experience

  • 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and deployment.
  • Strong Python proficiency (NumPy, Pandas, scikit-learn, PyTorch, TensorFlow).
  • Solid understanding of transformers, embeddings, and NLP pipelines.
  • Experience with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
  • Exposure to OCR, document parsing, and unstructured text analytics.
  • Familiarity with FastAPI/Flask, Docker, and cloud environments (AWS/GCP/Azure).
  • Working knowledge of CI/CD pipelines, model validation, and evaluation workflows.
  • Strong problem-solving skills, structured thinking, and production-quality coding practices.
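
To make the RAG responsibility above concrete, here is a deliberately simplified retrieval sketch: it uses TF-IDF vectors as a stand-in for real embeddings, retrieves the most relevant document chunks for a question, and assembles a grounded prompt. In production this would typically involve an embedding model, a vector database, and an LLM call, none of which are shown here, and the document chunks are invented.

# Simplified retrieval-augmented-generation (RAG) sketch.
# TF-IDF stands in for a real embedding model; chunks and question are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "The applicant's declared annual income is 12 lakh INR.",
    "The bank statement shows an average monthly balance of 90,000 INR.",
    "The property is located in Pune and was valued at 85 lakh INR.",
]
question = "What is the applicant's annual income?"

vectorizer = TfidfVectorizer()
chunk_vecs = vectorizer.fit_transform(chunks)
question_vec = vectorizer.transform([question])

# Rank chunks by similarity to the question and keep the top 2 as context.
scores = cosine_similarity(question_vec, chunk_vecs)[0]
top_chunks = [chunks[i] for i in scores.argsort()[::-1][:2]]

prompt = ("Answer using only the context below.\n\nContext:\n"
          + "\n".join(top_chunks)
          + f"\n\nQuestion: {question}\nAnswer:")
print(prompt)   # this prompt would then be sent to an LLM (call not shown)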


Bonus Skills

  • Domain exposure to Fintech/BFSI or Healthcare (e.g., credit underwriting, claims automation, KYC).
  • Experience with vector databases (FAISS, Pinecone, Weaviate) and semantic search.
  • Knowledge of MLOps tools (MLflow, Airflow, Kubeflow).
  • Experience integrating GenAI into SaaS or enterprise products.


Education

  • B.Tech / M.Tech / MS in Computer Science, Data Science, or related field.
  • (Equivalent hands-on experience will also be considered.)


Why Join

  • Build AI systems from prototype to production for live enterprise use.
  • Work with a senior AI and product team that values ownership, innovation, and impact.
  • Exposure to LLMs, GenAI, and Document AI in large-scale enterprise environments.
  • Competitive compensation, career growth, and a flexible work culture.


Read more
Jaipur
6 - 12 yrs
₹5L - ₹17L / yr
Data Operations
Data Visualization
Data collection
Data validation
Data integration
+18 more

About the Role

We are seeking an experienced Data Operations Lead to oversee and manage the data operations team responsible for data analysis, query development, and data-driven initiatives. This role plays a key part in ensuring the effective management, organization, and delivery of high-quality data across projects while driving process efficiency, data accuracy, and business impact.


Key Responsibilities

  • Lead and mentor the Data Operations team focused on data collection, enrichment, validation, and delivery.
  • Define and monitor data quality metrics, identify discrepancies, and implement process improvements.
  • Collaborate with Engineering for data integration, automation, and scalability initiatives.
  • Partner with Product and Business teams to ensure data alignment with strategic objectives.
  • Manage vendor relationships for external data sources and enrichment platforms.
  • Promote automation using tools such as SQL, Python, and BI platforms.


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Analytics, Information Systems, or related field.
  • 6–12 years of experience in data operations, management, or analytics, with at least 4 years in a leadership capacity.
  • Strong understanding of data governance, ETL processes, and quality control frameworks.
  • Proficiency with SQL, Excel/Google Sheets, and data visualization tools.
  • Exposure to automation and scripting (Python preferred).
  • Excellent communication, leadership, and project management skills.
  • Proven ability to manage teams and maintain high data quality under tight deadlines.


Preferred Skills

  • Experience in SaaS, B2B data, or lead intelligence environments.
  • Familiarity with GDPR, CCPA, and data privacy compliance.
  • Ability to work effectively in cross-functional and global teams.


About the Company

We are a leading revenue intelligence platform that combines advanced automation with a dedicated research team to achieve industry-leading data accuracy. Our platform offers millions of verified contact and company records, continuously re-verified to ensure reliability. With a commitment to quality, scalability, and exceptional customer experience, we empower organizations to make smarter, data-driven decisions.

We pride ourselves on a diverse, growth-oriented workplace that values continuous learning, collaboration, and excellence. Our team members enjoy competitive benefits, including paid leaves, bonuses, incentives, medical coverage, and training opportunities.

Read more
Borderless Access

at Borderless Access

4 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
13yrs+
Upto ₹35L / yr (Varies
)
skill iconPython
skill iconJava
skill iconNodeJS (Node.js)
skill iconSpring Boot
skill iconJavascript
+13 more

About Borderless Access

Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.

We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.

Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.

The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.


Key Responsibilities

  • Lead, mentor, and grow a cross-functional team of specialized engineers.
  • Foster a culture of collaboration, accountability, and continuous learning.
  • Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
  • Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
  • Promote clean, maintainable, and well-documented code across the team.
  • Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
  • Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
  • Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
  • Ensure timely delivery of high-quality software aligned with business goals.
  • Work closely with DevOps to ensure platform reliability, scalability, and observability.
  • Conduct regular 1:1s, performance reviews, and career development planning.
  • Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
  • Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.


Added Responsibilities

  • Defining and adhering to the development process.
  • Taking part in regular external audits and maintaining artifacts.
  • Identify opportunities for automation to reduce repetitive tasks.
  • Mentor and coach team members in the teams.
  • Continuously optimize application performance and scalability.
  • Collaborate with the Marketing team to understand different user journeys.


Growth and Development

The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:

  • Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
  • Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
  • Drive business objectives – Become part of defining and taking actions to meet the business objectives.


About You

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development.
  • Experience with microservices architecture and container orchestration.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Solid understanding of data structures, algorithms, and software design patterns.
  • Solid understanding of enterprise system architecture patterns.
  • Experience in managing a small to medium-sized team with varied experiences.
  • Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
  • Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
  • Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Experience with cloud platforms AWS, Azure, or GCP (preferred is Azure).
  • Knowledge of containerization technologies Docker and Kubernetes.


Read more
Agentic AI Platform

Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
4 - 7 yrs
₹25L - ₹50L / yr
Microservices
API
Cloud Computing
skill iconJava
skill iconPython
+18 more

ROLES AND RESPONSIBILITIES:

We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.


KEY RESPONSIBILITIES:

  • Lead and mentor a team of software engineers across backend, frontend, and integration areas.
  • Drive architectural design, technical reviews, and ensure scalability and reliability.
  • Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
  • Establish best practices in agile development, testing automation, and CI/CD pipelines.
  • Build reusable frameworks for low-code app development and AI-driven workflows.
  • Hire, coach, and develop engineers to strengthen technical capabilities and team culture.


IDEAL CANDIDATE:

  • B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
  • 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
  • Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
  • Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
  • Experience with Docker, Kubernetes, and CI/CD pipelines.
  • Excellent communication and problem-solving skills.



PREFERRED QUALIFICATIONS:

  • Experience building or scaling SaaS or platform-based products.
  • Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
  • Prior experience in a startup or high-growth product environment.
Read more
appscrip

at appscrip

2 recruiters
Nilam Surti
Posted by Nilam Surti
Bengaluru (Bangalore)
0 - 0 yrs
₹1.2L - ₹4L / yr
skill iconPython
skill iconMongoDB
FastAPI
Artificial Intelligence (AI)
skill iconGo Programming (Golang)

The requirements are as follows:


1) Familiar with the Django REST Framework.


2) Experience with the FastAPI framework will be a plus


3) Strong grasp of basic Python programming concepts (we do ask a lot of questions on this in our interviews :) )


4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus


5) Experience with any ML library will be a plus.


6) Familiarity with using Git, writing unit test cases for all code written, and CI/CD concepts will be a plus as well.


7) Familiar with basic code patterns like MVC.


8) Grasp on basic data structures.


You can contact me on nine three one six one two zero one three two

Read more
Intensity Global Technologies

at Intensity Global Technologies

4 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Delhi
3yrs+
Upto ₹10L / yr (Varies
)
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
skill iconMachine Learning (ML)
skill iconPython
PyTorch
+1 more

Job Summary:

We are seeking a skilled and forward-thinking Cloud AI Professional to join our technology team. The ideal candidate will have expertise in designing, deploying, and managing artificial intelligence and machine learning solutions in cloud environments (AWS, Azure, or Google Cloud). You will work at the intersection of cloud computing and AI, helping to build scalable, secure, and high-performance AI-driven applications and services.


Key Responsibilities:

  • Design, develop, and deploy AI/ML models in cloud environments (AWS, GCP, Azure).
  • Build and manage end-to-end ML pipelines using cloud-native tools (e.g., SageMaker, Vertex AI, Azure ML).
  • Collaborate with data scientists, engineers, and stakeholders to define AI use cases and deliver solutions.
  • Automate model training, testing, and deployment using MLOps practices.
  • Optimize performance and cost of AI/ML workloads in the cloud.
  • Ensure security, compliance, and scalability of deployed AI services.
  • Monitor model performance in production and retrain models as needed.
  • Stay current with new developments in AI/ML and cloud technologies.


Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in AI/ML and cloud computing.
  • Hands-on experience with cloud platforms (AWS, GCP, or Azure).
  • Proficient in Python, TensorFlow, PyTorch, or similar frameworks.
  • Strong understanding of MLOps tools and CI/CD for machine learning.
  • Experience with containerization (Docker, Kubernetes).
  • Familiarity with cloud-native data services (e.g., BigQuery, S3, Cosmos DB).


Preferred Qualifications:

  • Certifications in Cloud (e.g., AWS Certified Machine Learning, Google Cloud Professional ML Engineer).
  • Experience with generative AI, LLMs, or real-time inferencing.
  • Knowledge of data governance and ethical AI practices.
  • Experience with REST APIs and microservices architecture.


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.
Read more
IAI solution
Anajli Kanojiya
Posted by Anajli Kanojiya
Bengaluru (Bangalore)
5 - 8 yrs
₹30L - ₹35L / yr
skill iconPython
skill iconReact.js
skill iconDocker
skill iconMongoDB

Location: Bengaluru, India

Experience: 5 to 8 Years

Employment Type: Full-time


About the Role

We’re looking for an experienced Full Stack Developer with strong expertise across modern frontend frameworks, scalable backend systems, and cloud-native DevOps environments.

The ideal candidate will play a key role in designing, developing, and deploying end-to-end solutions that power high-performance, data-driven applications.


Key Responsibilities

  • Design, develop, and maintain scalable frontend applications using React.js and Next.js.
  • Build robust backend services and APIs using FastAPI (Python), Node.js, or Java.
  • Implement database design, queries, and optimization using PostgreSQL, MongoDB, and Redis.
  • Develop, test, and deploy cloud-native solutions on Azure (preferred) or AWS.
  • Manage containerized environments using Docker and Kubernetes.
  • Automate deployments and workflows with Terraform, GitHub Actions, or Azure DevOps.
  • Ensure application security, performance, and reliability across the stack.
  • Collaborate closely with cross-functional teams (designers, product managers, data engineers) to deliver quality software.


Required Skills

Frontend: Next.js, React.js, TypeScript, HTML, CSS, Tailwind (preferred)

Backend: Python (FastAPI), Node.js, Java, REST APIs, GraphQL (optional)

Databases: PostgreSQL, MongoDB, Redis

Cloud & DevOps: Azure (preferred), AWS, Docker, Kubernetes, Terraform

CI/CD: GitHub Actions, Azure DevOps, Jenkins (nice to have)

Version Control: Git, GitHub/GitLab


Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 4+ years of hands-on experience in full-stack development.
  • Strong problem-solving skills and ability to architect scalable solutions.
  • Familiarity with Agile development and code review processes.
  • Excellent communication and collaboration abilities.


Nice to Have

  • Experience with microservices architecture.
  • Exposure to API security and authentication (OAuth2, JWT).
  • Experience in setting up observability tools (Grafana, Prometheus, etc.).


Compensation

Competitive salary based on experience and technical proficiency.

Read more
Metron Security Private Limited
Prathamesh Shinde
Posted by Prathamesh Shinde
Bengaluru (Bangalore), Pune
2 - 5 yrs
₹4L - ₹10L / yr
skill iconPython

Job Description:


We are looking for a skilled Backend Developer with 2–5 years of experience in software development, specializing in Python and/or Golang. If you have strong programming skills, enjoy solving problems, and want to work on secure and scalable systems, we'd love to hear from you!


Location - Pune, Baner.

Interview Rounds - In Office


Key Responsibilities:

  • Design, build, and maintain efficient, reusable, and reliable backend services using Python and/or Golang
  • Develop and maintain clean and scalable code following best practices
  • Apply Object-Oriented Programming (OOP) concepts in real-world development
  • Collaborate with front-end developers, QA, and other team members to deliver high-quality features
  • Debug, optimize, and improve existing systems and codebase
  • Participate in code reviews and team discussions
  • Work in an Agile/Scrum development environment


Required Skills:

  • Strong experience in Python or Golang (working knowledge of both is a plus)
  • Good understanding of OOP principles
  • Familiarity with RESTful APIs and back-end frameworks
  • Experience with databases (SQL or NoSQL)
  • Excellent problem-solving and debugging skills
  • Strong communication and teamwork abilities


Good to Have:

  • Prior experience in the security industry
  • Familiarity with cloud platforms like AWS, Azure, or GCP
  • Knowledge of Docker, Kubernetes, or CI/CD tools

Read more
Vola Finance

at Vola Finance

1 video
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Upto ₹14L / yr (Varies
)
skill iconPython
SQL
Statistical Analysis
A/B Testing
MS-Excel
+5 more

Business Analyst

Domain: Product / Fintech / Credit Cards


Mandatory Technical Skill Set

  • Previous experience in a product-based company is mandatory
  • Experience working with credit bureau data (CIBIL, Experian, Equifax, etc.) for customer profiling, credit risk insights, or strategy building
  • Churn analysis and strategy building on subscription management experience
  • BNPL or credit cards growth strategy building experience
  • ML model development experience is a plus
  • Python
  • Statistical analysis and A/B testing
  • Excel
  • SQL
  • Visualization tools such as Redash / Grafana / Tableau / Power BI
  • Bitbucket, GitHub, and other versioning tools


Roles and Responsibilities

  • Work on product integrations, data collection, and data sanity checks
  • Improve product features for sustainable churn management
  • Cohort analysis and strategy building for credit card usage growth
  • Conduct A/B testing for better subscription conversion and offers
  • Monitor key business metrics
  • Track changes and perform impact analysis
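
As an illustrative sketch of the A/B testing work mentioned above, the snippet below compares subscription conversion rates of two offer variants with a two-sample proportions z-test from statsmodels; the counts are invented and the 5% threshold is just a common convention.

# Two-sample proportions z-test on invented conversion counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 465]       # variant A, variant B conversions
exposures = [10_000, 10_000]   # users shown each variant

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Variant B's conversion rate differs significantly from A's.")
else:
    print("No statistically significant difference at the 5% level.")
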
Read more
Vola Finance

at Vola Finance

1 video
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Upto ₹20L / yr (Varies
)
skill iconPython
FastAPI
RESTful APIs
GraphQL
skill iconAmazon Web Services (AWS)
+7 more

Python Backend Developer

We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.


Roles & Responsibilities

  • Develop and maintain scalable, secure, and robust backend services using Python
  • Design and implement RESTful APIs and/or GraphQL endpoints
  • Integrate user-facing elements developed by front-end developers with server-side logic
  • Write reusable, testable, and efficient code
  • Optimize components for maximum performance and scalability
  • Collaborate with front-end developers, DevOps engineers, and other team members
  • Troubleshoot and debug applications
  • Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
  • Ensure security and data protection

Mandatory Technical Skill Set

  • Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
  • Python backend development experience
  • Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
  • Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
  • Previous hands-on experience in:
  • EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, S3, Backup, AWS WAF
  • SQL
Read more
Remote only
2 - 4 yrs
₹4L - ₹8L / yr
skill iconPython
JSON
LLMS
oops
skill iconJava
+4 more

Role Overview

We are seeking a Junior Developer with 1-3 years' experience and strong foundations in Python, databases, and AI technologies. The ideal candidate will support the development of AI-powered solutions, focusing on LLM integration, prompt engineering, and database-driven workflows. This is a hands-on role with opportunities to learn and grow into advanced AI engineering responsibilities.

Key Responsibilities

  • Develop, test, and maintain Python-based applications and APIs.
  • Design and optimize prompts for Large Language Models (LLMs) to improve accuracy and performance.
  • Work with JSON-based data structures for request/response handling.
  • Integrate and manage PostgreSQL (pgSQL) databases, including writing queries and handling data pipelines.
  • Collaborate with the product and AI teams to implement new features.
  • Debug, troubleshoot, and optimize performance of applications and workflows.
  • Stay updated on advancements in LLMs, AI frameworks, and generative AI tools.

Required Skills & Qualifications

  • Strong knowledge of Python (scripting, APIs, data handling).
  • Basic understanding of Large Language Models (LLMs) and prompt engineering techniques.
  • Experience with JSON data parsing and transformations.
  • Familiarity with PostgreSQL or other relational databases.
  • Ability to write clean, maintainable, and well-documented code.
  • Strong problem-solving skills and eagerness to learn.
  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
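
As a small sketch of the JSON-handling side of this role, the snippet below parses a canned LLM response into Python objects and validates the fields before they would be written to PostgreSQL. The response text and field names are invented, and the actual LLM and database calls are not shown.

# Parse and validate a JSON-shaped LLM response before persisting it.
# The raw_response string and required fields are invented for illustration.
import json

raw_response = '{"permit_type": "building", "valid_until": "2026-03-31", "fee": 1500}'

REQUIRED_FIELDS = {"permit_type", "valid_until", "fee"}

try:
    data = json.loads(raw_response)
except json.JSONDecodeError as exc:
    raise ValueError(f"LLM did not return valid JSON: {exc}") from exc

missing = REQUIRED_FIELDS - data.keys()
if missing:
    raise ValueError(f"Response is missing required fields: {sorted(missing)}")

# At this point `data` could be inserted into PostgreSQL (insert not shown).
print(data["permit_type"], data["fee"])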

Nice-to-Have (Preferred)

  • Exposure to AI/ML frameworks (e.g., LangChain, Hugging Face, OpenAI APIs).
  • Experience working in startups or fast-paced environments.
  • Familiarity with version control (Git/GitHub) and cloud platforms (AWS, GCP, or Azure).

What We Offer

  • Opportunity to work on cutting-edge AI applications in permitting & compliance.
  • Collaborative, growth-focused, and innovation-driven work culture.
  • Mentorship and learning opportunities in AI/LLM development.
  • Competitive compensation with performance-based growth.


Read more
Proximity Works

at Proximity Works

1 video
5 recruiters
Eman Khan
Posted by Eman Khan
Remote only
5 - 10 yrs
₹30L - ₹60L / yr
skill iconPython
skill iconData Science
pandas
Scikit-Learn
TensorFlow
+9 more

We’re seeking a highly skilled, execution-focused Senior Data Scientist with a minimum of 5 years of experience. This role demands hands-on expertise in building, deploying, and optimizing machine learning models at scale, while working with big data technologies and modern cloud platforms. You will be responsible for driving data-driven solutions from experimentation to production, leveraging advanced tools and frameworks across Python, SQL, Spark, and AWS. The role requires strong technical depth, problem-solving ability, and ownership in delivering business impact through data science.


Responsibilities

  • Design, build, and deploy scalable machine learning models into production systems.
  • Develop advanced analytics and predictive models using Python, SQL, and popular ML/DL frameworks (Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Leverage Databricks, Apache Spark, and Hadoop for large-scale data processing and model training.
  • Implement workflows and pipelines using Airflow and AWS EMR for automation and orchestration.
  • Collaborate with engineering teams to integrate models into cloud-based applications on AWS.
  • Optimize query performance, storage usage, and data pipelines for efficiency.
  • Conduct end-to-end experiments, including data preprocessing, feature engineering, model training, validation, and deployment.
  • Drive initiatives independently with high ownership and accountability.
  • Stay up to date with industry best practices in machine learning, big data, and cloud-native deployments.



Requirements:

  • Minimum 5 years of experience in Data Science or Applied Machine Learning.
  • Strong proficiency in Python, SQL, and ML libraries (Pandas, Scikit-learn, TensorFlow, PyTorch).
  • Proven expertise in deploying ML models into production systems.
  • Experience with big data platforms (Hadoop, Spark) and distributed data processing.
  • Hands-on experience with Databricks, Airflow, and AWS EMR.
  • Strong knowledge of AWS cloud services (S3, Lambda, SageMaker, EC2, etc.).
  • Solid understanding of query optimization, storage systems, and data pipelines.
  • Excellent problem-solving skills, with the ability to design scalable solutions.
  • Strong communication and collaboration skills to work in cross-functional teams.
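
As a hedged illustration of the orchestration work listed above, here is a minimal Airflow 2.x-style DAG that chains a preprocessing step and a training step. The task bodies are stubs, and the DAG ID, schedule, and function names are invented for the example.

# Minimal Airflow 2.x DAG sketch; task bodies are stubs and all names are invented.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator


def preprocess():
    print("load raw data, clean it, and write features")   # stub


def train_model():
    print("fit the model on the latest features")          # stub


with DAG(
    dag_id="daily_model_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    preprocess_task = PythonOperator(task_id="preprocess", python_callable=preprocess)
    train_task = PythonOperator(task_id="train_model", python_callable=train_model)

    preprocess_task >> train_task      # train only after preprocessing succeeds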



Benefits:

  • Best in class salary: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
  • Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.


About Us:

Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.


Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Read more
Reltio

at Reltio

2 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4 - 7 yrs
Upto ₹42L / yr (Varies
)
skill iconPython
skill iconMachine Learning (ML)
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Artificial Intelligence (AI)
+5 more

Job Title: Senior AI Engineer

Location: Bengaluru, India – (Hybrid)


About Reltio

At Reltio®, we believe data should fuel business success. Reltio’s AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data.

Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands across multiple industries around the globe rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk, and drive growth.

At Reltio, our values guide everything we do. With an unyielding commitment to prioritizing our “Customer First”, we strive to ensure their success. We embrace our differences and are “Better Together” as One Reltio. We “Simplify and Share” our knowledge to remove obstacles for each other. We “Own It”, holding ourselves accountable for our actions and outcomes. Every day, we innovate and evolve so that today is “Always Better Than Yesterday.”

If you share and embody these values, we invite you to join our team at Reltio and contribute to our mission of excellence.

Reltio has earned numerous awards and top rankings for our technology, our culture, and our people. Founded on a distributed workforce, Reltio offers flexible work arrangements to help our people manage their personal and professional lives. If you’re ready to work on unrivaled technology as part of a collaborative team on a mission to enable digital transformation with connected data, let’s talk!


Job Summary

As a Senior AI Engineer at Reltio, you will be a core part of the team responsible for building intelligent systems that enhance data quality, automate decision-making, and drive entity resolution at scale.

You will work with cross-functional teams to design and deploy advanced AI/ML solutions that are production-ready, scalable, and embedded into our flagship data platform.

This is a high-impact engineering role with exposure to cutting-edge problems in entity resolution, deduplication, identity stitching, record linking, and metadata enrichment.


Job Duties and Responsibilities

  • Design, implement, and optimize state-of-the-art AI/ML models for solving real-world data management challenges such as entity resolution, classification, similarity matching, and anomaly detection (a brief similarity-matching sketch follows this list).
  • Work with structured, semi-structured, and unstructured data to extract signals and engineer intelligent features for large-scale ML pipelines.
  • Develop scalable ML workflows using Spark, MLlib, PyTorch, TensorFlow, or MLFlow, with seamless integration into production systems.
  • Translate business needs into technical design and collaborate with data scientists, product managers, and platform engineers to operationalize models.
  • Continuously monitor and improve model performance using feedback loops, A/B testing, drift detection, and retraining strategies.
  • Conduct deep dives into customer data challenges and apply innovative machine learning algorithms to address accuracy, speed, and bias.
  • Actively contribute to research and experimentation efforts, staying updated with the latest AI trends in graph learning, NLP, probabilistic modeling, etc.
  • Document designs and present outcomes to both technical and non-technical stakeholders, fostering transparency and knowledge sharing.
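
As a flavor of the similarity-matching work described above (and not Reltio's actual method), here is a minimal sketch that scores record pairs with TF-IDF character n-grams and cosine similarity; the sample records, field names, and 0.8 threshold are hypothetical.

# Minimal illustrative sketch of pairwise record similarity for entity resolution.
# Assumes scikit-learn and pandas; fields and the threshold are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["Acme Corp.", "ACME Corporation", "Globex LLC"],
    "city": ["new york", "New York", "Springfield"],
})

# Concatenate fields into one comparison string per record.
text = (records["name"] + " " + records["city"]).str.lower()

# Character n-grams are robust to small spelling variations across sources.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
vectors = vectorizer.fit_transform(text)
similarity = cosine_similarity(vectors)

# Flag candidate duplicate pairs above a (hypothetical) threshold.
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        if similarity[i, j] >= 0.8:
            print(records.loc[i, "id"], "<->", records.loc[j, "id"], round(similarity[i, j], 2))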


Skills You Must Have

  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Artificial Intelligence, or related field. PhD is a plus.
  • 4+ years of hands-on experience in developing and deploying machine learning models in production environments.
  • Proficiency in Python (NumPy, scikit-learn, pandas, PyTorch/TensorFlow) and experience with large-scale data processing tools (Spark, Kafka, Airflow).
  • Strong understanding of ML fundamentals, including classification, clustering, feature selection, hyperparameter tuning, and evaluation metrics (a small tuning example follows this list).
  • Demonstrated experience working with entity resolution, identity graphs, or data deduplication.
  • Familiarity with containerized environments (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
  • Strong debugging, analytical, and communication skills with a focus on delivery and impact.
  • Attention to detail, ability to work independently, and a passion for staying updated with the latest advancements in the field of data science.
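
For reference, a small example of the hyperparameter tuning and evaluation skills listed above, using scikit-learn on synthetic data; the model choice, parameter grid, and scoring metric are placeholders rather than a prescribed setup.

# Illustrative hyperparameter search and held-out evaluation with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    scoring="f1",
    cv=5,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out F1:", f1_score(y_test, search.predict(X_test)))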


Skills Good to Have

  • Experience with knowledge graphs, graph-based ML, or embedding techniques.
  • Exposure to deep learning applications in data quality, record matching, or information retrieval.
  • Experience building explainable AI solutions in regulated domains.
  • Prior work in SaaS, B2B enterprise platforms, or data infrastructure companies.
Versatile Commerce LLP
Posted by Burugupally Shailaja
Hyderabad
3 - 6 yrs
₹4L - ₹6L / yr
Selenium
Java
Python
Jenkins
TestNG
+6 more

We’re Hiring – Automation Test Engineer!

We at Versatile Commerce are looking for passionate Automation Testing Professionals to join our growing team!

📍 Location: Gachibowli, Hyderabad (Work from Office)

💼 Experience: 3 – 5 Years

Notice Period: Immediate Joiners Preferred

What we’re looking for:

✅ Strong experience in Selenium / Cypress / Playwright

✅ Proficient in Java / Python / JavaScript

✅ Hands-on with TestNG / JUnit / Maven / Jenkins

✅ Experience in API Automation (Postman / REST Assured); a brief Python example follows this list

✅ Good understanding of Agile Testing & Defect Management Tools (JIRA, Zephyr)
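
For illustration, here is a minimal API automation example in Python using pytest and requests (a stand-in for the Postman / REST Assured tooling named above); the endpoint URL and expected fields are hypothetical.

# Minimal API test sketch with pytest + requests; run with `pytest`.
# The base URL and response fields are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com"  # hypothetical endpoint


def test_get_user_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/users/1", timeout=10)
    assert response.status_code == 200
    body = response.json()
    assert "id" in body and "name" in body


def test_unknown_user_returns_404():
    response = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=10)
    assert response.status_code == 404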

Appiness Interactive Pvt. Ltd.
Posted by S Suriya Kumar
Bengaluru (Bangalore)
3 - 6 yrs
₹4L - ₹30L / yr
Python
Retrieval Augmented Generation (RAG)
Vector database
NodeJS (Node.js)
PostgreSQL
+5 more

Company Description

Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the un-trodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.


Role Overview

We are hiring a Founding Backend Engineer to architect and build the core backend infrastructure for our enterprise AI chat platform. This role involves creating everything from secure chat APIs and data pipelines to document embeddings, vector search, and RAG (Retrieval-Augmented Generation) workflows. You will work directly with the CTO and play a pivotal role in shaping the platform’s architecture, performance, and scalability as we onboard enterprise customers. This is a high-ownership role where you’ll influence product direction, tech decisions, and long-term engineering culture.


Key Responsibilities

● Architect, develop, and scale backend systems and APIs powering AI chat and knowledge retrieval.

● Build data ingestion & processing pipelines for structured and unstructured enterprise data.

● Implement multi-tenant security, user access control (RBAC), encryption, and compliance-friendly design.

● Integrate and orchestrate LLMs (OpenAI, Anthropic, etc.) with vector databases (Pinecone, Qdrant, OpenSearch) to support advanced AI and RAG workflows (a brief retrieval sketch follows this list).

● Ensure platform reliability, performance, and fault tolerance from day one.

● Own end-to-end CI/CD, observability, and deployment pipelines.

● Collaborate directly with leadership on product strategy, architecture, and scaling roadmap.
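
To make the RAG workflow above concrete, here is a minimal, framework-free retrieval sketch (not the platform's actual implementation). Documents are embedded into vectors, held in memory, and ranked by cosine similarity against the query; embed() is a placeholder for a real embedding model, and the documents are hypothetical.

# Minimal sketch of the retrieval step in a RAG workflow using an in-memory store.
# embed() is a stand-in for a real embedding model; documents are hypothetical.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash words into a fixed-size bag-of-words vector."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


documents = [
    "Employees accrue 18 days of paid leave per year.",
    "Expense reports must be filed within 30 days.",
    "The VPN must be used on public Wi-Fi networks.",
]
doc_vectors = np.stack([embed(d) for d in documents])


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k documents by cosine similarity to the query."""
    scores = doc_vectors @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


# The retrieved passages would then be placed into the LLM prompt as context.
print(retrieve("How many vacation days do I get?"))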


Required Skills

● Strong hands-on experience in Python (Django/FastAPI) or Node.js (TypeScript) — Python preferred.

● Deep understanding of PostgreSQL, Redis, Docker, and modern API design patterns.

● Experience with LLM integration, RAG pipelines, and vector search technologies.

● Strong exposure to cloud platforms (AWS or GCP), CI/CD, and microservice architecture.

● Solid foundation in security best practices — authentication, RBAC, encryption, data isolation.

● Ability to independently design and deliver high-performance distributed systems.

Deqode
Posted by Shraddha Katare
Pune
2 - 3 yrs
₹7L - ₹15L / yr
DevOps
Amazon Web Services (AWS)
Python
Bash
PowerShell
+2 more

Role: DevOps Engineer

Experience: 2–3+ years

Location: Pune

Work Mode: Hybrid (3 days Work from office)

Mandatory Skills:

  • Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
  • Proficiency in scripting languages (Bash, Python, PowerShell); a small scripting example follows this list
  • Hands-on experience with containerization (Docker) and container management
  • Proven experience managing infrastructure (On-premise or AWS/VMware)
  • Experience with version control systems (Git/Bitbucket/GitHub)
  • Familiarity with monitoring and logging tools for system performance tracking
  • Knowledge of security best practices and compliance standards
  • Bachelor's degree in Computer Science, Engineering, or related field
  • Willingness to support production issues during odd hours when required
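
As a small example of the scripting and monitoring skills above, the sketch below checks a service health endpoint and exits non-zero on failure so a CI/CD job or cron task can alert on it; the URL is a hypothetical placeholder.

# Illustrative ops script: poll a health endpoint and exit non-zero on failure.
import sys

import requests

HEALTH_URL = "https://internal.example.com/healthz"  # hypothetical


def main() -> int:
    try:
        response = requests.get(HEALTH_URL, timeout=5)
    except requests.RequestException as exc:
        print(f"health check failed: {exc}", file=sys.stderr)
        return 1
    if response.status_code != 200:
        print(f"unhealthy: HTTP {response.status_code}", file=sys.stderr)
        return 1
    print("service healthy")
    return 0


if __name__ == "__main__":
    sys.exit(main())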

Preferred Qualifications:

  • Certifications in AWS, Docker, or VMware
  • Experience with configuration management tools like Ansible
  • Exposure to Agile and DevOps methodologies
  • Hands-on experience with Virtual Machines and Container orchestration

