
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

jk

at jk

mithul m
Posted by mithul m
Saravanampatti
0 - 3 yrs
₹2.5L - ₹3.5L / yr
C
Python
Teaching
Communication Skills

Company Description

Established in the year 2000, KGiSL Educational Institutions is the brainchild of Dr. Ashok Bakthavathsalam, founder and managing director of KG Information Systems Private Limited (KGISL).

Regarded as a pioneer of industry-embedded education in the region, the KG Group of Institutions at Saravanampatti is the only campus that integrates institute and industry infrastructure in an environment dedicated to learning.


Role Description

This is a full-time, on-site role for a Program Mentor located in Coimbatore. The Program Mentor will guide and assist students in their academic and professional development. Responsibilities include mentoring students, conducting workshops, providing individual and group support, assisting with curriculum planning, and tracking mentee progress. The mentor will also collaborate with the faculty and administration to provide insights and feedback on program improvements.


Qualifications

  • Ability to mentor students in C and Python programming concepts and best practices
  • Strong mentoring and coaching skills, with the ability to guide students toward their goals
  • Experience in planning and conducting workshops or training sessions
  • Excellent written and verbal communication skills
  • Ability to collaborate effectively with faculty, students, and stakeholders
  • Organizational and time management skills to track and support student progress
  • Prior experience in an educational or related field is advantageous
  • BE / B.Tech degree with a CSE or IT background is preferred


InMobi

at InMobi

2 products
Eman Khan
Posted by Eman Khan
Bengaluru (Bangalore)
2 - 7 yrs
₹25L - ₹60L / yr
Python
Java
Go Programming (Golang)
NodeJS (Node.js)
Systems design
+1 more

Job Summary

We're looking for a Senior Backend Software Development Engineer (SDE 2 and SDE 3) with deep technical expertise in building and operating production systems at scale. You'll own entire components/services within microservices and data pipelines that handle ~200K queries per second and process petabytes of data daily. This role demands hands-on coding excellence, strong Low-Level Design (LLD) skills, and complete ownership—from component architecture through infrastructure management to cost optimization. You'll architect solutions, lead technical decisions, and drive operational excellence while shipping code daily.

 

Key Responsibilities:

  • Component Ownership: Own one or more critical components/services end-to-end—responsible for architecture, development, deployment, operations, and evolution
  • Technical Ownership: Own the entire lifecycle of your components—design, implementation, testing, deployment, monitoring, incident response, and continuous improvement
  • Low-Level Design: Create detailed technical designs (LLD) for complex systems—defining data models, APIs, concurrency patterns, and failure modes
  • Hands-on Development: Write production-grade code daily—this is not a purely architectural role; you'll be deep in the codebase
  • Infrastructure Ownership: Own and operate the infrastructure your components run on—capacity planning, scaling, reliability improvements
  • Cost Management: Drive cost optimization for owned components—analyze spending, identify waste, implement efficient architectures
  • Scale & Performance: Build and optimize systems handling 200K+ QPS and petabyte-scale data processing
  • Observability: Design and implement comprehensive monitoring, alerting, and debugging capabilities for owned components
  • Incident Leadership: Lead incident response for your components and related services, conduct post-mortems, drive systemic improvements
  • On-Call Excellence: Participate in on-call rotations and ensure your components are operationally sound (runbooks, alerts, dashboards)
  • Technical Roadmap: Define and drive the technical roadmap for your owned components—balancing feature development, tech debt, and operational improvements
  • Technical Mentorship: Guide junior and mid-level engineers on system design, code quality, and production best practices
  • Cross-functional Collaboration: Work with product, infra, and other eng teams to define requirements and deliver solutions
  • Agile Execution: Break down complex projects, deliver incrementally in daily cadence, iterate based on feedback


Required Qualifications:

  • Experience: 2+ years building and operating backend systems in production environments at scale
  • Education: B.E./B.Tech in Computer Science or equivalent practical experience
  • Component Ownership: Proven track record of owning significant components or services from inception to maturity—demonstrable end-to-end ownership
  • Low-Level Design (LLD): Proven ability to create detailed technical designs—data structures, algorithms, API contracts, concurrency models, failure handling
  • Programming Mastery: Expert-level proficiency in at least one modern language (Go, Python, Java, NodeJS etc.) with track record of writing maintainable, performant production code
  • Databases: Deep hands-on experience with SQL and NoSQL databases—schema design, query optimization, indexing strategies, operational troubleshooting
  • Microservices at Scale: Extensive experience building, deploying, and operating microservices handling high throughput and large data volumes
  • Data Pipelines: Strong background designing and running data processing pipelines at scale (batch and/or streaming)
  • Observability: Expert understanding of metrics, logging, tracing, and alerting—you know how to make systems debuggable
  • Production Operations: Significant experience with incident response, on-call rotations, debugging live issues under pressure
  • Infrastructure Knowledge: Hands-on experience managing infrastructure, understanding resource utilization, capacity planning
  • Cost Consciousness: Experience analyzing and optimizing infrastructure costs at scale
  • Distributed Systems: Strong fundamentals in distributed systems, concurrency, consistency models, and failure scenarios
  • Accountability: Track record of taking full ownership—from design through deployment to ongoing operations and improvements
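The failure-handling side of LLD called out above (concurrency models, failure modes, retries) can be sketched in a few lines. This is a hedged, minimal illustration with invented names and parameters, not InMobi's actual code; production services would add jitter, timeouts, and circuit breaking:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # 10ms, 20ms, ...

# A flaky dependency (hypothetical) that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)  # succeeds on the third attempt
```

A real LLD would also specify which exceptions are retryable and how the retry budget interacts with upstream timeouts.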


Preferred Qualifications:

  • Experience with cloud platforms (GCP, AWS, or Azure) including cost management tools
  • Kubernetes and container orchestration at scale
  • Infrastructure as Code (Terraform, Pulumi, etc.)
  • Streaming data systems (Kafka, Pub/Sub, Kinesis, Flink, etc.)
  • SRE principles and reliability engineering practices
  • Experience with FinOps or infrastructure cost optimization
  • Performance profiling and optimization (CPU, memory, I/O)
  • Technical leadership experience including mentorship of teams and driving multi-component initiatives
  • Open source contributions or recognized technical writing
Blockify
Dhanur Sehgal
Posted by Dhanur Sehgal
Remote only
3 - 8 yrs
₹6L - ₹12L / yr
Go Programming (Golang)
Python
Scalability
Infrastructure architecture

We’re hiring a remote, contract-based Backend Engineer who can build and run production systems end-to-end. You’ll own backend services (Go + Python), server setup, backend infrastructure, and cloud cost optimization—with a strong focus on reliability, scalability, and performance-per-dollar.

Responsibilities

  • Build, deploy, and maintain production backend services in Golang and Python
  • Design scalable architectures: stateless services, queues, caching, DB scaling, horizontal scaling, load balancing
  • Own backend infra end-to-end: server setup, deployments, rollbacks, monitoring, alerting, incident response
  • Set up and maintain cloud infrastructure (AWS/GCP/Azure): networking, IAM, security groups, VPC/VNet, secrets, backups
  • Improve performance and reliability: profiling, p95 latency reduction, throughput, resilience, graceful degradation
  • Drive cost optimization: right-sizing, autoscaling, storage tuning, caching strategy, data transfer reduction, reserved/savings plans
  • Implement observability: metrics, logs, tracing, dashboards, SLOs, and alerting
  • Ensure security and hygiene: least privilege, patching, key rotation, audit logs, basic threat modeling

Required skills (must-have)

  • Strong production experience with Golang (concurrency, profiling/pprof, performance tuning)
  • Strong production experience with Python (backend services, async/task systems, performance awareness)
  • Hands-on Linux server management (debugging, networking basics, processes, memory, disk I/O)
  • Experience with backend infrastructure and cloud infrastructure (AWS/GCP/Azure)
  • Proven ability to design systems that scale horizontally across multiple servers/instances
  • Track record of improving cost efficiency without breaking SLOs (latency/uptime)
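One standard technique behind the "scale horizontally" requirement above is consistent hashing: adding a server remaps only a fraction of keys instead of reshuffling everything. A toy Python sketch (node names hypothetical; real systems would use this inside a router or client library):

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Toy consistent-hash ring with virtual nodes."""

    def __init__(self, nodes, vnodes=64):
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            self.add(node, vnodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node, vnodes=64):
        # Each server owns many small arcs of the ring for even spread.
        for i in range(vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect_right(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["app-1", "app-2", "app-3"])
before = {k: ring.node_for(k) for k in (f"user:{i}" for i in range(1000))}
ring.add("app-4")  # scale out by one instance
after = {k: ring.node_for(k) for k in before}
moved = sum(before[k] != after[k] for k in before)  # roughly 1/4 of keys
```

The design point: every key that moves, moves to the new node only, which is what keeps cache invalidation and data migration bounded during scale-out.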

Nice to have

  • Docker + Kubernetes (or strong containerization + orchestration experience)
  • Terraform/Pulumi/CloudFormation (Infrastructure-as-Code)
  • Postgres/MySQL at scale; Redis; Kafka/RabbitMQ/SQS; CDN and edge caching
  • Security best practices: WAF, rate-limits, secure secrets management, hardening
  • Experience with high-throughput systems (1,000+ RPS or large request volumes)

Contract details

  • Remote, contract role
  • Flexible time zones (overlap for standups + incident coverage)
  • Start: ASAP
  • Duration: ongoing / project-based (extendable based on performance)


enParadigm

at enParadigm

2 candid answers
3 products
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Up to ₹8L / yr (varies)
Java
Python
Selenium WebDriver
Cypress
Playwright

Job Description:


Test Design & Execution

Design and execute detailed, well-structured test plans, test cases, and test scenarios to ensure high-quality product releases.


Automation Development

Develop and maintain automated test scripts for functional and regression testing using tools such as Selenium, Cypress, or Playwright.


Defect Management

Identify, log, and track defects through to resolution using tools like Jira, ensuring minimal impact on production releases.


API & Backend Testing

Conduct API testing using Postman, perform backend validation, and execute database testing using SQL/Oracle.


Collaboration

Work closely with developers, product managers, and UX designers in an Agile/Scrum environment to embed quality across the SDLC.


CI/CD Integration

Integrate automated test suites into CI/CD pipelines using platforms such as Jenkins or Azure DevOps.


Required Skills & Experience

  • Minimum 2+ years of experience in Software Quality Assurance or Automation Testing.
  • Hands-on experience with Selenium WebDriver, Cypress, or Playwright.
  • Proficiency in at least one programming/scripting language: Java, Python, or JavaScript.
  • Strong experience in functional, regression, integration, and UI testing.
  • Solid understanding of SQL for data validation and backend testing.
  • Familiarity with Git for version control, Jira for defect tracking, and Postman for API testing.
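The SQL-for-data-validation skill above can be illustrated with a small, self-contained sketch; the table and the validation rules are invented for illustration (sqlite3 stands in for the real backend database):

```python
import sqlite3

# Hypothetical backend table to validate.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)"
)
conn.executemany(
    "INSERT INTO orders (amount, status) VALUES (?, ?)",
    [(10.0, "paid"), (25.5, "paid"), (0.0, "pending")],
)

# Rule 1: no negative amounts should exist.
bad_amounts = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount < 0"
).fetchone()[0]

# Rule 2: status must be one of the allowed values.
bad_status = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status NOT IN ('paid', 'pending', 'cancelled')"
).fetchone()[0]
```

In practice, checks like these run as automated assertions after each deployment or data load, so regressions surface before users see them.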


Desirable Skills

  • Experience in mobile application testing (Android/iOS).
  • Exposure to performance testing tools such as JMeter.
  • Experience working with cloud platforms like AWS or Azure.


enParadigm

at enParadigm

2 candid answers
3 products
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Up to ₹16L / yr (varies)
Java
Python
NodeJS (Node.js)
Go Programming (Golang)
PHP
+4 more

We are looking for a Full Stack Developer to build scalable software solutions and contribute across the entire software development lifecycle—from conception to deployment.

You will work closely with cross-functional teams and should be comfortable with both front-end and back-end technologies, modern frameworks, and third-party libraries. If you enjoy building visually appealing, functional applications and thrive in Agile environments, we’d love to connect.


Current Technologies Used

  • Backend: FastAPI (active), PHP (legacy), Java (legacy)
  • Frontend: Svelte, TypeScript, JavaScript

Experience with Python and PHP is a plus, but not mandatory.


Role Responsibilities

  • Collaborate with development teams and product managers to ideate software solutions
  • Design client-side and server-side architecture
  • Build visually appealing front-end applications
  • Develop and manage efficient databases and applications
  • Write effective and scalable APIs
  • Test software for responsiveness and performance
  • Troubleshoot, debug, and upgrade systems
  • Implement security and data-protection measures
  • Build mobile-responsive features and applications
  • Create and maintain technical documentation

Candidate Requirements:


Education

  • B.Tech / BE in Computer Science, Statistics, or a relevant field

Experience

  • 2–4 years as a Full Stack Developer or in a similar role

Location

  • Bangalore (Hybrid)

Skill Set – Role Based

  • Experience building web applications
  • Familiarity with common technology stacks
  • Knowledge of front-end languages and libraries: HTML, CSS, JavaScript, XML, jQuery
  • Knowledge of back-end languages and frameworks: Java, Python, PHP
  • Familiarity with JavaScript frameworks/libraries: Angular, React, Svelte, Node.js
  • Familiarity with databases (PostgreSQL, MySQL, MongoDB), web servers (Apache), and UI/UX principles

Skill Set – Behavioural

  • Excellent communication and teamwork skills
  • Strong attention to detail
  • Good organizational skills
  • Analytical mindset


npk solutions

npk solutions

Agency job
Bengaluru (Bangalore)
2 - 4 yrs
₹4L - ₹4.2L / yr
Java
Python
Communication Skills



We're looking for an experienced Zoho Developer (2-4 years) to join our team! You'll work with internal teams to understand business requirements, configure and customize Zoho apps, and deliver end-to-end solutions. You'll also provide support, troubleshoot issues, and guide junior team members. Required skills include hands-on experience with Zoho Creator, CRM, Flow, Books, Analytics, and more, plus strong problem-solving and communication skills. Experience in client-facing roles and managing multiple projects is a plus.

enParadigm

at enParadigm

2 candid answers
3 products
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
0 - 1 yrs
Up to ₹10L / yr (varies)
Java
Python
PHP
JavaScript
Angular (2+)
+6 more

We are looking for a Junior Full Stack Developer to join our growing engineering team and contribute to building high-quality software solutions. In this role, you will support the entire development lifecycle—from design to deployment—while working closely with product managers and senior engineers.


If you have a passion for technology, enjoy learning new tools, and thrive in a collaborative environment, we’d love to hear from you.


Current Technology Stack

  • Backend: FastAPI (active), PHP (legacy), Java (legacy)
  • Frontend: Svelte, TypeScript, JavaScript

Key Responsibilities

  • Collaborate with development teams and product managers to ideate and deliver software solutions
  • Assist in designing client-side and server-side architecture
  • Contribute to building intuitive and visually appealing user interfaces
  • Support database design and application development
  • Help develop and maintain APIs
  • Participate in testing to ensure performance, scalability, and responsiveness
  • Assist in troubleshooting, debugging, and enhancing existing systems
  • Support security and data-protection initiatives
  • Contribute to mobile-responsive feature development
  • Help maintain technical documentation

Candidate Requirements

Education

  • B.Tech / BE in Computer Science, Statistics, or a related field

Location

  • Bangalore

Role-Based Skills

  • Exposure to web application development
  • Familiarity with common technology stacks
  • Basic knowledge of front-end technologies such as HTML, CSS, JavaScript, XML, and jQuery
  • Working understanding of back-end languages such as Java, Python, or PHP
  • Familiarity with JavaScript frameworks/libraries like Angular, React, Svelte, or Node.js
  • Awareness of databases such as PostgreSQL, MySQL, or MongoDB
  • Basic understanding of web servers (e.g., Apache) and UI/UX principles

Behavioral Skills

  • Strong communication and teamwork abilities
  • High attention to detail
  • Good organizational skills
  • Analytical and problem-solving mindset


Srijan Technologies

at Srijan Technologies

6 recruiters
Devendra Singh
Posted by Devendra Singh
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5.5 - 7 yrs
₹20L - ₹35L / yr
Python
pandas
PySpark

About Us:

We turn customer challenges into growth opportunities.

Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.

We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.

Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners.


Role Summary:

We are looking for a Lead Data Engineer to design, build, and scale our customer data engineering platforms.


This role is hands-on and requires deep expertise in Python-based data engineering with PySpark and the Microsoft Fabric ecosystem. The candidate will work on building analytics-heavy workflows, converting business logic and analytical models into production-grade pipelines, and enabling enterprise-scale insights.

 

Key Responsibilities

  • Design, develop, and maintain scalable Python-based data integration pipelines using Pandas, NumPy, PyArrow, Polars, Dask, etc.
  • Apply deep PySpark expertise across pipelines and workloads.
  • Build and optimize data workflows using Microsoft Fabric (Lakehouse, Pipelines, Notebooks, Dataflows Gen2).
  • Convert complex business logic, Excel models, and analytical workflows into Python/PySpark pipelines.
  • Implement high-performance ETL/ELT pipelines for large-scale analytics.
  • Ensure data quality, validation, reconciliation, and monitoring.
  • Design robust data models and semantic layers for analytics and BI.
  • Collaborate with analytics, BI, and product teams to deliver end-to-end data solutions.
  • Mentor engineers and contribute to platform and team scaling.

Required Skills & Experience

  • 6+ years of experience in data engineering, with strong hands-on Python/PySpark skills.
  • Proven ability to translate Excel-based mapping spreadsheets and business calculations into Python/PySpark data engineering workflows.
  • Experience working in a Lead/Senior Data Engineer capacity.
  • Strong experience with the Microsoft Fabric ecosystem.
  • Strong SQL, data integration, and performance-tuning skills.
  • Experience with large-scale analytical data platforms (Databricks/Fabric/Synapse).
  • Ability to work independently with minimal supervision.
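Translating an Excel-style mapping sheet into a pipeline step, as required above, often reduces to a lookup join plus a calculated column. A hedged pandas sketch (column names and data invented; the real workloads here would run in PySpark/Fabric, where the same join/derive/aggregate shape applies):

```python
import pandas as pd

# The "mapping spreadsheet" as a lookup table.
mapping = pd.DataFrame({
    "region_code": ["N", "S", "W"],
    "region_name": ["North", "South", "West"],
})
# Raw fact data to enrich.
sales = pd.DataFrame({
    "region_code": ["N", "S", "S", "W"],
    "units": [10, 5, 7, 3],
    "unit_price": [2.0, 4.0, 4.0, 10.0],
})

# Join the mapping, derive the calculated column, then aggregate.
enriched = (
    sales.merge(mapping, on="region_code", how="left")
         .assign(revenue=lambda df: df.units * df.unit_price)
)
by_region = enriched.groupby("region_name", as_index=False)["revenue"].sum()
```

Expressing the spreadsheet logic as named pipeline steps is what makes it testable and reusable, unlike formulas buried in cells.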
DeepIntent

at DeepIntent

2 candid answers
17 recruiters
Amruta Mundale
Posted by Amruta Mundale
Pune
8 - 10 yrs
Best in industry
Technical support
SQL
Apache
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+2 more

What You’ll Do:

We are looking for a Staff Operations Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.  

This role sits in the Engineering Operations team and requires integration and partnership with the Engineering organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. They should also demonstrate the ability to collaborate and partner with others.

  • Serve as the Engineering interface between Analytics and Engineering teams.
  • Develop and standardize all interface points for analysts to retrieve and analyze data with a focus on research methodologies and data-based decision-making.
  • Optimize queries and data access efficiencies, serve as an expert in how to most efficiently attain desired data points.
  • Build “mastered” versions of the data for Analytics-specific querying use cases.
  • Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent.
  • Interpret analytics methodology requirements and apply them to data architecture to create standardized queries and operations for use by analytics teams.
  • Implement DataOps practices.
  • Master existing and new Data Pipelines and develop appropriate queries to meet analytics-specific objectives.
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
  • Operate between Engineers and Analysts to unify both practices for analytics insight creation.

Who You Are:

  • 8+ years of experience in tech support, specialized in monitoring and maintaining data pipelines.
  • Adept in market research methodologies and in using data to deliver representative insights.
  • Inquisitive and curious; understands how to query complicated data sets and how to move and combine data between databases.
  • Deep SQL experience is a must.
  • Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical needs.
  • Fluency in English and proven success working with teams in the U.S.
  • Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.
  • Experience working with public clouds such as GCP and AWS.
  • Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
  • Proficient with SQL, Python or a JVM-based language, and Bash.
  • Experience with Apache open-source projects such as Spark, Druid, Beam, and Airflow, and big data databases like BigQuery and ClickHouse.
  • Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, and stay curious.
  • Experience in debugging UI and backend issues is an added advantage.
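The "optimize queries and data access efficiencies" responsibility above can be made concrete with a small, hypothetical example: sqlite3 stands in for the analytics warehouse, the table and index names are invented, and the query plan shows how an index turns a full scan into an index search:

```python
import sqlite3

# Hypothetical events table standing in for an analytics query path.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, campaign_id INTEGER, ts TEXT)"
)
conn.executemany(
    "INSERT INTO events (campaign_id, ts) VALUES (?, ?)",
    [(i % 50, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE campaign_id = 7"

# Without an index the plan is a full table scan.
plan_before = " ".join(
    row[-1] for row in conn.execute(f"EXPLAIN QUERY PLAN {query}")
)

# Add an index on the filter column and re-check the plan.
conn.execute("CREATE INDEX idx_events_campaign ON events (campaign_id)")
plan_after = " ".join(
    row[-1] for row in conn.execute(f"EXPLAIN QUERY PLAN {query}")
)
```

Checking the plan before and after is the basic loop of query optimization; warehouse engines differ in syntax (`EXPLAIN`, `EXPLAIN ANALYZE`) but not in the idea.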


BigThinkCode Technologies
Kumar AGS
Posted by Kumar AGS
Chennai
2 - 5 yrs
₹1L - ₹15L / yr
Python
Django
API
FastAPI

At BigThinkCode, our technology solves complex problems. We are looking for a talented engineer to join our technology team at Chennai.

  

This is an opportunity to join a growing team and make a substantial impact at BigThinkCode. We offer a challenging workplace that welcomes innovative ideas and talent, provides growth opportunities, and fosters a positive environment.

 

The job description is below for your reference; if you are interested, please share your profile so we can connect and discuss.

 

Company: BigThinkCode Technologies

URL: https://www.bigthinkcode.com/

Job Role: Software Engineer / Senior Software Engineer

Experience: 2 - 5 years

Work location: Chennai

Work Mode: Hybrid

Joining time: Immediate – 4 weeks


Responsibilities

  • Build and enhance backend features as part of the tech team.
  • Take ownership of features end-to-end in a fast-paced product/startup environment.
  • Collaborate with managers, designers, and engineers to deliver user-facing functionality.
  • Design and implement scalable REST APIs and supporting backend systems.
  • Write clean, reusable, well-tested code and contribute to internal libraries.
  • Participate in requirement discussions and translate business needs into technical tasks.
  • Support the technical roadmap through architectural input and continuous improvement.

 

Required Skills:

  • Strong understanding of Algorithms, Data Structures, and OOP principles.
  • Experience integrating with third-party systems (payment/SMS APIs, mapping services, etc.).
  • Proficiency in Python and experience with at least one framework (Flask / Django / FastAPI).
  • Hands-on experience with design patterns, debugging, and unit testing (pytest/unittest).
  • Working knowledge of relational or NoSQL databases and ORMs (SQLAlchemy / Django ORM).
  • Familiarity with asynchronous programming (async/await, FastAPI async).
  • Experience with caching mechanisms (Redis).
  • Ability to perform code reviews and maintain code quality.
  • Exposure to cloud platforms (AWS/Azure/GCP) and containerization (Docker).
  • Experience with CI/CD pipelines.
  • Basic understanding of message brokers (RabbitMQ / Kafka / Redis streams).
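The async/await and caching skills above combine naturally in a read-through cache. A minimal in-process sketch (a stand-in for Redis; the function and key names are illustrative only):

```python
import asyncio
import time

_cache: dict = {}  # key -> (value, stored_at)

async def get_or_compute(key, compute, ttl=60.0):
    """Return a fresh cached value, else await the computation
    and cache it -- the usual read-through caching pattern."""
    entry = _cache.get(key)
    if entry and time.monotonic() - entry[1] < ttl:
        return entry[0]  # cache hit
    value = await compute()
    _cache[key] = (value, time.monotonic())
    return value

calls = 0
async def slow_lookup():
    global calls
    calls += 1
    await asyncio.sleep(0)  # pretend this is a DB round trip
    return {"plan": "pro"}

async def main():
    a = await get_or_compute("user:42", slow_lookup)
    b = await get_or_compute("user:42", slow_lookup)  # served from cache
    return a, b

a, b = asyncio.run(main())
```

With Redis, the dict would be replaced by `GET`/`SETEX` calls, but the control flow (check, miss, compute, store) is the same.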

 

Benefits:

  • Medical cover for employees and eligible dependents.
  • Tax-beneficial salary structure.
  • Comprehensive leave policy.
  • Competency development training programs.

 

Ekloud INC
ashwini rathod
Posted by ashwini rathod
india
5 - 20 yrs
₹5L - ₹30L / yr
ADF
Databricks
PySpark
SQL
Python
+2 more

Hiring: Azure Data Engineer


Experience level: 5–12 yrs

Location: Bangalore

Work arrangement: On-site

Budget range: Flexible


Mandatory skills (a self-rating of 7+ is a must): ADF, Databricks, PySpark, SQL

Good to have: Delta Live Tables, Python, team handling as a Manager (7+ yrs experience), Azure Functions, Unity Catalog, real-time streaming, data pipelines

Appiness Interactive
Shashirekha S
Posted by Shashirekha S
Bengaluru (Bangalore)
5 - 13 yrs
₹10L - ₹23L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Large Language Models (LLM)
Large Language Models (LLM) tuning
Vector database
+4 more

Required Skills & Qualifications

  • Strong hands-on experience with LLM frameworks and models, including LangChain, OpenAI (GPT-4), and LLaMA
  • Proven experience in LLM orchestration, workflow management, and multi-agent system design using frameworks such as LangGraph
  • Strong problem-solving skills with the ability to propose end-to-end solutions and contribute at an architectural/system-design level
  • Experience building scalable AI-backed backend services using FastAPI and asynchronous programming patterns
  • Solid experience with cloud infrastructure on AWS, including EC2, S3, and Load Balancers
  • Hands-on experience with Docker and containerization for deploying and managing AI/ML applications
  • Good understanding of Transformer-based architectures and how modern LLMs work internally
  • Strong skills in data processing and analysis using NumPy and Pandas
  • Experience with data visualization tools such as Matplotlib and Seaborn for analysis and insights
  • Hands-on experience with Retrieval-Augmented Generation (RAG), including document ingestion, embeddings, and vector search pipelines
  • Experience in model optimization and training techniques, including fine-tuning, LoRA, and QLoRA


Nice to Have / Preferred

  • Experience designing and operating production-grade AI systems
  • Familiarity with cost optimization, observability, and performance tuning for LLM-based applications
  • Exposure to multi-cloud or large-scale AI platforms
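The retrieval step of the RAG pipelines mentioned above can be sketched without any model at all: a toy bag-of-words "embedding" and cosine similarity stand in for a real embedding model and vector database (the documents are invented for illustration):

```python
import math
from collections import Counter

# Toy document store (would be chunks ingested into a vector DB).
docs = [
    "LoRA fine-tunes large language models with low-rank adapters",
    "FastAPI serves asynchronous Python backends",
    "Vector search retrieves documents by embedding similarity",
]

def embed(text):
    # Stand-in for a real embedding model: term-frequency vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = [(doc, embed(doc)) for doc in docs]

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return top k."""
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

top = retrieve("how does embedding similarity search work")
```

In a production RAG system the retrieved chunks would then be inserted into the LLM prompt; swapping in real embeddings and an ANN index changes the quality, not the shape, of this loop.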

CoPoint Data
Alankrita Bhattacharyya
Posted by Alankrita Bhattacharyya
Gurugram
5 - 10 yrs
₹20L - ₹35L / yr
NodeJS (Node.js)
ASP.NET
React.js
Java
JavaScript
+9 more

About CoPoint AI


CoPoint AI is a specialized consulting firm focused on transforming businesses through process improvement, data insights, and technology-driven innovation. We leverage AI technologies, Microsoft cloud platforms, and modern web development frameworks to deliver intelligent, scalable solutions that drive measurable impact for our clients. Our team partners across industries to design and deploy solutions that streamline operations, enhance customer experiences, and enable data-driven growth.


Our Vision 

We transform businesses through process improvement and data insights, leveraging AI on the Microsoft stack.


Our Values

  • Be Purposeful: Think straight, communicate, always do the right thing.
  • In Partnership: With our team, for our clients, in our communities.
  • Create Impact: Deliver value-based solutions, help individuals achieve their dreams, demand profitable growth.


Role Overview

As a Senior Consultant at CoPoint AI, you will drive end-to-end delivery of both AI-enabled data solutions and modern web applications. You will blend technical expertise in AI, Microsoft platforms, and full-stack web development with business insight to architect and implement impactful solutions across client environments.

Key Responsibilities

  • Lead design and implementation of end-to-end data, AI, and web application solutions
  • Architect and build responsive, user-friendly web interfaces integrated with enterprise data systems
  • Develop and optimize secure, scalable APIs and microservices using cloud-native principles
  • Implement AI-powered features in web applications using LLMs, Azure OpenAI, and Cognitive Services
  • Guide teams in AI-assisted software development lifecycle improvements
  • Build frameworks for responsible AI governance and model monitoring
  • Maintain a comprehensive understanding of the AI solution landscape
  • Lead AI-enabled digital transformation initiatives
  • Advise clients on AI adoption strategies and change management
  • Translate AI capabilities into measurable business value
  • Design multi-model architectures combining analytics, AI, and web experiences
  • Act as a subject matter expert in Microsoft Azure and modern web frameworks (e.g., React, Angular, .NET Core)
  • Manage project work streams and lead cross-functional delivery teams
  • Cultivate and manage client relationships, providing strategic and technical guidance
  • Identify and propose innovation opportunities through data and digital experiences
  • Mentor junior developers, analysts, and consultants
  • Ensure high quality and consistency in solution delivery and user experience


Qualifications

  • Deep expertise in Microsoft data technologies (Azure Data Factory, Synapse, Power BI). 
  • Proven experience implementing enterprise AI solutions on Azure 
  • Advanced knowledge of large language models and their business applications 
  • Expertise in AI-enhanced software development methodologies 
  • Experience with AI model evaluation, validation, and responsible deployment 
  • Proficiency in developing custom AI solutions using Azure OpenAI, Cognitive Services, and ML services 
  • Experience integrating AI into existing enterprise applications and data platforms 
  • Experience managing client expectations and delivering high-quality solutions 
  • Strong technical leadership and problem-solving capabilities 
  • Excellent communication and presentation skills 
  • Ability to anticipate client needs and propose strategic solutions


What You Should Expect:


  • A culture of continuous learning with certification support.
  • Clear career advancement pathways.
  • Competitive compensation and benefits.
  • Flexible work arrangements.
  • A collaborative environment that values innovation and creativity.



Ready to shape the future of enterprise technology? Join our team of Microsoft technology experts and make an impact.


Read more
Pentabay Softwares

at Pentabay Softwares

1 recruiter
Sandhiya M
Posted by Sandhiya M
Chennai
0.5 - 4 yrs
₹2L - ₹6L / yr
skill iconMongoDB
express js
skill iconReact.js
skill iconNodeJS (Node.js)
RESTful APIs
+4 more

Job Title: MERN Stack Developer

Company: Pentabay Softwares

Location: Anna Salai (Mount Road), Chennai

Employment Type: Full-Time

Experience Required: 1–4 Years


Job Description:


Pentabay Softwares is looking for a skilled and motivated MERN Stack Developer to join our growing team. The ideal candidate should have hands-on experience in developing scalable web applications using MongoDB, Express.js, React.js, and Node.js.


Roles & Responsibilities:

  • Develop and maintain web applications using the MERN stack
  • Build reusable, efficient, and scalable code
  • Collaborate with UI/UX designers and backend teams
  • Design and integrate RESTful APIs
  • Troubleshoot, debug, and optimize application performance
  • Participate in code reviews and follow best development practices
  • Work closely with project managers to meet deadlines


Required Skills:

  • Strong experience with MongoDB, Express.js, React.js, and Node.js
  • Proficiency in JavaScript (ES6+), HTML5, and CSS3
  • Experience with REST APIs and third-party integrations
  • Knowledge of Git/version control systems
  • Basic understanding of security and performance optimization
  • Familiarity with Agile/Scrum methodology

Good to Have:

  • Experience with Redux, Next.js, or TypeScript
  • Exposure to cloud platforms (AWS, Azure, or GCP)
  • Understanding of CI/CD pipelines

Who Can Apply:

  • Candidates with 1–4 years of relevant experience
  • Strong problem-solving and communication skills
  • Ability to work independently and as part of a team

Work Location:

📍 Anna Salai (Mount Road), Chennai

Read more
Impacto Digifin Technologies

at Impacto Digifin Technologies

4 candid answers
1 recruiter
Navitha Reddy
Posted by Navitha Reddy
Bengaluru (Bangalore)
1 - 4 yrs
₹5L - ₹7L / yr
skill iconPython
Automation
Test Automation (QA)
Object Oriented Programming (OOPs)
RESTful APIs
+10 more

Job Description: Python Automation Engineer

Location: Bangalore (Office-based)

Experience: 1–2 Years

Joining: Immediate to 30 Days

Role Overview

We are looking for a Python Automation Engineer who combines strong programming skills with hands-on automation expertise. This role involves developing automation scripts, designing automation frameworks, and contributing independently to automation solutions, with leads delegating tasks and setting solution direction. The ideal candidate is not a novice: they have solid real-world Python experience and are comfortable working across API automation, automation tooling, and CI/CD-driven environments.

Key Responsibilities

  • Design, develop, and maintain automation scripts and reusable automation frameworks using Python
  • Build and enhance API automation for REST-based services and common backend frameworks
  • Independently own automation tasks and deliver solutions with minimal supervision
  • Collaborate with leads and engineering teams to understand automation requirements
  • Maintain clean, modular, and scalable automation code
  • Occasionally review automation code written by other team members
  • Integrate automation suites with CI/CD pipelines
  • Package and ship automation tools/frameworks using containerization

Required Skills & Qualifications

Python (Core Requirement)

Strong, in-depth hands-on experience in Python, including:

  • Object-Oriented Programming (OOP) and modular design
  • Writing reusable libraries and frameworks
  • Exception handling, logging, and debugging
  • Asynchronous concepts and performance-aware coding
  • Unit testing and test automation practices
  • Code quality, readability, and maintainability

API Automation

  • Strong experience automating REST APIs
  • Hands-on with common Python API libraries (e.g., requests, httpx, or equivalent)
  • Understanding of API request/response handling, validations, and workflows
  • Familiarity with different backend frameworks such as FastAPI

DevOps & Engineering Practices (Must-Have)

  • Strong knowledge of Git
  • Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, or similar)
  • Ability to integrate automation suites into pipelines
  • Hands-on experience with Docker for shipping automation tools/frameworks

Good-to-Have Skills

  • UI automation using Selenium (Page Object Model, cross-browser testing, headless execution)
  • Exposure to Playwright for UI automation
  • Basic working knowledge of Java and/or JavaScript (reading, writing small scripts, debugging)
  • Understanding of API authentication, retries, mocking, and related best practices

Domain Exposure

  • Experience or interest in SaaS platforms
  • Exposure to AI/ML-based platforms is a plus

What We’re Looking For

  • A strong engineering mindset, not just tool usage
  • Someone who can build automation systems, not only execute test cases
  • Comfortable working independently while aligning with technical leads
  • Passion for clean code, scalable automation, and continuous improvement
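As a rough sketch of the API request/response validation workflow this role describes (the endpoint contract and field names below are hypothetical, and the stdlib `json` module stands in for a real HTTP client such as requests or httpx):

```python
import json

def validate_user_response(raw_body: str, expected_status: int, actual_status: int) -> list:
    """Check an API response against a simple contract; return a list of problems."""
    problems = []
    if actual_status != expected_status:
        problems.append(f"status {actual_status} != expected {expected_status}")
    try:
        data = json.loads(raw_body)
    except json.JSONDecodeError:
        return problems + ["body is not valid JSON"]
    # Hypothetical contract: required fields and their expected types
    contract = {"id": int, "email": str, "active": bool}
    for field, ftype in contract.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], ftype):
            problems.append(
                f"field {field} has type {type(data[field]).__name__}, expected {ftype.__name__}"
            )
    return problems
```

In a real framework this kind of helper would sit behind pytest assertions and run in the CI/CD pipeline.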

Read more
Euphoric Thought Technologies
Bengaluru (Bangalore)
3 - 4 yrs
₹6L - ₹14L / yr
SQL
skill iconPython

Job Description:

Summary

The Data Engineer will be responsible for designing, developing, and maintaining the data infrastructure. They must have experience with SQL and Python.

Roles & Responsibilities:

● Collaborate with product, business, and engineering stakeholders to understand key metrics, data needs, and reporting pain points.

● Design, build, and maintain clean, scalable, and reliable data models using DBT.

● Write performant SQL and Python code to transform raw data into structured marts and reporting layers.

● Create dashboards using Tableau or similar tools.

● Work closely with data platform engineers, architects, and analysts to ensure data pipelines are resilient, well-governed, and high quality.

● Define and maintain source-of-truth metrics and documentation in the analytics layer.

● Partner with product engineering teams to understand new features and ensure appropriate instrumentation and event collection.

● Drive reporting outcomes by building dashboards or working with BI teams to ensure timely delivery of insights.

● Help scale our analytics engineering practice by contributing to internal tooling, frameworks, and best practices.
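The "raw data into structured marts" responsibility above can be illustrated with a minimal, self-contained sketch; the table and column names are invented, and in-memory SQLite stands in for a warehouse such as Snowflake or BigQuery:

```python
import sqlite3

# Build a toy reporting layer from a raw table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, order_date TEXT, amount REAL, status TEXT);
    INSERT INTO raw_orders VALUES
        (1, '2024-01-01', 100.0, 'completed'),
        (2, '2024-01-01', 50.0,  'cancelled'),
        (3, '2024-01-02', 75.0,  'completed');
    -- Reporting mart: one row per day, completed revenue only
    CREATE TABLE mart_daily_revenue AS
    SELECT order_date  AS day,
           COUNT(*)    AS orders,
           SUM(amount) AS revenue
    FROM raw_orders
    WHERE status = 'completed'
    GROUP BY order_date;
""")
rows = conn.execute(
    "SELECT day, orders, revenue FROM mart_daily_revenue ORDER BY day"
).fetchall()
```

In a DBT project the same SELECT would live in a model file, with the materialization, tests, and documentation handled by DBT.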

Who You Are:

Experience : 3 to 4 years of experience in analytics/data engineering, with strong hands-on expertise in DBT, SQL, Python and dashboarding tools.

● Experience working with modern data stacks (e.g., Snowflake, BigQuery, Redshift, Airflow).

● Strong data modeling skills (dimensional, star/snowflake schema, data vault, etc.).

● Excellent communication and stakeholder management skills.

● Ability to work independently and drive business outcomes through data.

● Exposure to product instrumentation and working with event-driven data is a plus.

● Prior experience in a fast-paced, product-led company is preferred.

Read more
Fireblaze Technologies
Fireblaze Technologies
Posted by Fireblaze Technologies
Nagpur
1 - 2 yrs
₹2L - ₹3L / yr
Training and Development
Microsoft Excel
SQL
skill iconPython
NumPy
+2 more

Deliver engaging classroom and/or online training sessions on topics including:


Python for Data Science

Data Analytics using Excel and SQL

Statistics and Probability

Machine Learning and Deep Learning

Data Visualization using Power BI / Tableau
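A flavour of the hands-on statistics material such a trainer might walk through, using only the standard library (the sample scores are made up):

```python
import statistics

# Descriptive statistics on a small sample of student scores
scores = [72, 85, 90, 66, 78, 95, 88]
summary = {
    "mean": round(statistics.mean(scores), 2),
    "median": statistics.median(scores),
    "stdev": round(statistics.stdev(scores), 2),  # sample standard deviation
}
```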

Create and update course materials, projects, assignments, and quizzes.

Provide hands-on training and real-world project guidance.

Evaluate student performance, provide constructive feedback, and track progress.

Stay updated with the latest trends, tools, and technologies in Data Science.

Mentor students during capstone projects and industry case studies.

Coordinate with the academic and operations team for batch planning and feedback.

Assist with the development of new courses and curriculum as needed.

Read more
Euphoric Thought Technologies
Remote, Bengaluru (Bangalore)
3 - 4 yrs
₹11L - ₹13L / yr
skill iconPython
SQL

We are seeking a Data Engineer with 3–4 years of relevant experience to join our team. The ideal candidate should have strong expertise in Python and SQL and be available to join immediately.

Location: Bangalore

Experience: 3–4 Years

Joining: Immediate Joiner preferred

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines and data models
  • Extract, transform, and load (ETL) data from multiple sources
  • Write efficient and optimized SQL queries for data analysis and reporting
  • Develop data processing scripts and automation using Python
  • Ensure data quality, integrity, and performance across systems
  • Collaborate with cross-functional teams to support business and analytics needs
  • Troubleshoot data-related issues and optimize existing processes
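As an illustrative transform step for the ETL responsibilities above (the order schema is hypothetical; real pipelines would add quarantine tables and logging):

```python
from datetime import date

def clean_orders(raw_rows):
    """Normalise strings, cast types, and drop rows that fail validation."""
    cleaned = []
    for row in raw_rows:
        try:
            cleaned.append({
                "order_id": int(row["order_id"]),
                "customer": row["customer"].strip().lower(),
                "amount": round(float(row["amount"]), 2),
                "day": date.fromisoformat(row["day"]),
            })
        except (KeyError, ValueError, AttributeError):
            continue  # drop malformed rows rather than failing the whole load
    return cleaned

raw = [
    {"order_id": "7", "customer": "  Alice ", "amount": "19.999", "day": "2024-03-01"},
    {"order_id": "x", "customer": "Bob", "amount": "5", "day": "2024-03-02"},  # bad id, dropped
]
result = clean_orders(raw)
```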

Required Skills & Qualifications:

  • 3–4 years of hands-on experience as a Data Engineer or similar role
  • Strong proficiency in Python and SQL
  • Experience working with relational databases and large datasets
  • Good understanding of data warehousing and ETL concepts
  • Strong analytical and problem-solving skills
  • Ability to work independently and in a team-oriented environment

Preferred:

  • Experience with cloud platforms or data tools (added advantage)
  • Exposure to performance tuning and data optimization





Read more
IXG Inc
Hyderabad
0 - 1 yrs
₹20000 - ₹40000 / mo
Design thinking
AI Agents
skill iconPython
skill iconMongoDB
skill iconPostgreSQL
+1 more

AI-Native Software Developer Intern


Build real AI agents used daily across the company

We’re looking for a high-agency, AI-native software developer intern to help us build internal AI agents that improve productivity across our entire company (80–100 people using them daily).


You will ship real systems, used by real teams, with real impact.

If you’ve never built anything outside coursework, this role is probably not a fit.


What You’ll Work On

You will work directly on designing, building, deploying, and iterating AI agents that power internal workflows.

Examples of problems you may tackle:


Internal AI agents for:

  • Knowledge retrieval across Notion / docs / Slack
  • Automated report generation
  • Customer support assistance
  • Process automation (ops, hiring, onboarding, etc.)
  • Decision-support copilots
  • Prompt engineering + structured outputs + tool-using agents

Building workflows using:

  • LLM APIs
  • Vector databases
  • Agent frameworks
  • Internal dashboards
  • Improving reliability, latency, cost, and usability of AI systems
  • Designing real UX around AI tools (not just scripts)
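The "LLM APIs + vector databases" workflows above usually hinge on a retrieval step; a toy bag-of-words version (real systems use embedding models and a vector database, and the documents here are invented) looks like:

```python
import math
from collections import Counter

def tokenize(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(c1, c2):
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query (toy retrieval step)."""
    q = tokenize(query)
    return max(docs, key=lambda d: cosine(q, tokenize(d)))

docs = [
    "onboarding checklist for new hires",
    "quarterly report generation steps",
    "slack support escalation process",
]
best = retrieve("how do I generate the quarterly report", docs)
```

The retrieved document would then be injected into the LLM prompt as context.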

You will own features end-to-end:

  • Problem understanding
  • Solution design
  • Implementation
  • Testing
  • Deployment
  • Iteration based on user feedback


What We Expect From You

You must:

  • Be AI-native: you actively use tools like:
  • ChatGPT / Claude / Cursor / Copilot
  • AI for debugging, scaffolding, refactoring
  • Prompt iteration
  • Rapid prototyping
  • Be comfortable with at least one programming language (Python, TypeScript, JS, etc.)
  • Have strong critical thinking
  • You question requirements
  • You think about edge cases
  • You optimize systems, not just make them “work”
  • Be high agency
  • You don’t wait for step-by-step instructions
  • You proactively propose solutions
  • You take ownership of outcomes
  • Be able to learn fast on the job

Help will be provided, but you will not be spoon-fed.


Absolute Requirement (Non-Negotiable)

If you have not built any side projects with a visible output, you will most likely be rejected.

We expect at least one of:

  • A deployed web app
  • A GitHub repo with meaningful commits
  • A working AI tool
  • A live demo link
  • A product you built and shipped
  • An agent, automation, bot, or workflow you created


Bonus Points (Strong Signals)

These are not required but will strongly differentiate you:

  • Built projects using:
  • LLM APIs (OpenAI, Anthropic, etc.)
  • LangChain / LlamaIndex / custom agent frameworks
  • Vector DBs like Pinecone, Weaviate, FAISS
  • RAG systems
  • Experience deploying:
  • Vercel, Fly.io, Render, AWS, etc.
  • Built internal tools for a team before
  • Strong product intuition (you care about UX, not just code)
  • Experience automating your own workflows using scripts or AI


What You’ll Gain

You will get:

  • Real experience building AI agents used daily
  • Ownership over production systems
  • Deep exposure to:
  • AI architecture
  • Product thinking
  • Iterative engineering
  • Tradeoffs (cost vs latency vs accuracy)
  • A portfolio that actually means something in 2026
  • A strong shot at long-term roles based on performance

If you perform well, you won’t leave with a certificate; you'll leave with real-world building experience.


Who This Is Perfect For

  • People who already build things for fun
  • People who automate their own life with scripts/tools
  • People who learn by shipping
  • People who prefer responsibility over structure
  • People who are excited by ambiguity

Who This Is Not For

Be honest with yourself:

  • If you need step-by-step instructions
  • If you avoid open-ended problems
  • If you’ve never built anything outside assignments
  • If you dislike using AI tools while coding

This will be frustrating for you.


How To Apply

Send:

  • Your GitHub
  • Links to projects (deployed preferred)
  • A short note explaining:
  • What you built
  • Why you built it
  • What you’d improve if you had more time

Strong portfolios beat strong resumes.

Read more
Noida
3 - 4 yrs
₹3L - ₹6L / yr
skill iconPython
skill iconDjango
skill iconPostgreSQL
skill iconPostman
skill iconHTML/CSS
+2 more

Build and maintain scalable web applications using Python + Django

Develop REST APIs using Django REST Framework (DRF) for internal and partner integrations

Work on frontend screens (templates / HTML / CSS / JS) and integrate APIs in the UI

Implement authentication/authorization, validations, and secure coding practices

Work with databases (MySQL/PostgreSQL), ORM, migrations, indexing, and query optimization

Deploy and manage apps on Azure (App Service / VM / Storage / Azure SQL as applicable)

Integrate third-party services (payment, SMS/email, partner APIs) when required

Write clean, maintainable code, and support production debugging & performance improvements

Collaborate with product/ops teams to deliver features on time


Must Have Skills

  • Python, Django (2–4 years hands-on)
  • Django REST Framework (DRF) – building and consuming REST APIs
  • Strong understanding of SQL and relational databases (MySQL/PostgreSQL)
  • Frontend basics: HTML, CSS, JavaScript, Bootstrap (enough to handle screens + API integration)
  • Experience with Git and standard development workflows
  • Comfortable working on deployments and environments on Azure

Good to Have (Preferred)

  • Azure exposure: App Service, Azure Storage, Azure SQL, Key Vault, CI/CD (Azure DevOps)
  • Background jobs: Celery / Redis or cron-based scheduling
  • Basic understanding of security practices: JWT/session auth, permissions, rate limiting
  • Experience in fintech / gift cards / loyalty / voucher systems is a plus
  • Unit testing (pytest/Django test framework) and basic logging/monitoring


Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Mumbai
3 - 5 yrs
₹8L - ₹12L / yr
DAX
skill iconPython
SQL
Data modeling

About the Role:


We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.


Key Responsibilities:


Data Modelling & Visualization

  • Build scalable and high-quality data models in Power BI using best practices.
  • Define relationships, hierarchies, and measures to support effective storytelling.
  • Ensure dashboards meet standards in accuracy, visualization principles, and timelines.


Data Transformation & ETL

  • Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
  • Design and optimize ETL pipelines using SQL, Python, and Big Data tools.
  • Manage and process large-scale datasets from various sources and formats.


Business Problem Translation

  • Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
  • Decompose business questions into testable hypotheses and identify relevant datasets for validation.


Performance & Troubleshooting

  • Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
  • Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.


Analytical Storytelling

  • Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics.
  • Develop data narratives that drive business impact.


Solution Design

  • Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.


Required Skills & Experience:


  • Minimum 3+ years of experience as a Data Engineer or in a similar data-focused role.
  • Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
  • Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
  • Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
  • Familiarity with DAX Studio and Tabular Editor.
  • Experience in handling high-volume data in production environments.


Preferred (Good to Have):


  • Exposure to Big Data technologies such as:
  • PySpark
  • Hadoop
  • Hive / HDFS
  • Spark Streaming (optional but preferred)


Why Join Us?


  • Work with a team that's passionate about data innovation.
  • Exposure to modern data stack and tools.
  • Flat structure and collaborative culture.
  • Opportunity to influence data strategy and architecture decisions.

 

Read more
Remote only
4 - 5 yrs
₹7L - ₹15L / yr
SQL
PL/SQL, T-SQL, PostgreSQL, or MySQL
skill iconPython
Pandas, NumPy, SQLAlchemy, Psycopg2.
Database Design
+12 more

Database Programmer (SQL & Python)

Experience: 4 – 5 Years

Location: Remote

Employment Type: Full-Time

About the Opportunity

We are a mission-driven HealthTech organization dedicated to bridging the gap in global healthcare equity. By harnessing the power of AI-driven clinical insights and real-world evidence, we help healthcare providers and pharmaceutical companies deliver precision medicine to underrepresented populations.

We are looking for a skilled Database Programmer with a strong blend of SQL expertise and Python automation skills to help us manage, transform, and unlock the value of complex clinical data. This is a fully remote role where your work will directly contribute to improving patient outcomes and making life-saving treatments more affordable and accessible.


Key Responsibilities

  • Data Architecture & Management: Design, develop, and maintain robust relational databases to store large-scale, longitudinal patient records and clinical data.
  • Complex Querying: Write and optimize sophisticated SQL queries, stored procedures, and triggers to handle deep clinical datasets, ensuring high performance and data integrity.
  • Python Automation: Develop Python scripts and ETL pipelines to automate data ingestion, cleaning, and transformation from diverse sources (EHRs, lab reports, and unstructured clinical notes).
  • AI Support: Collaborate with Data Scientists to prepare datasets for AI-based analytics, Knowledge Graphs, and predictive modeling.
  • Data Standardization: Map and transform clinical data into standardized models (such as HL7, FHIR, or proprietary formats) to ensure interoperability across healthcare ecosystems.
  • Security & Compliance: Implement and maintain rigorous data security protocols, ensuring all database activities comply with global healthcare regulations (e.g., HIPAA, GDPR).
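The query-optimization work described above can be sketched with stdlib sqlite3 (the lab-results schema is invented; production data would live in PostgreSQL/MySQL under the compliance controls noted):

```python
import sqlite3

# Demonstrate that a composite index turns a full scan into an index search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lab_results (patient_id INTEGER, test_code TEXT, value REAL)")
conn.executemany(
    "INSERT INTO lab_results VALUES (?, ?, ?)",
    [(i % 100, "HBA1C", 5.0 + (i % 30) / 10) for i in range(1000)],
)
conn.execute("CREATE INDEX idx_lab_patient ON lab_results (patient_id, test_code)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT value FROM lab_results WHERE patient_id = ? AND test_code = ?",
    (42, "HBA1C"),
).fetchall()
# The detail column of the plan names the index when it is used
uses_index = any("idx_lab_patient" in row[-1] for row in plan)
```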


Required Skills & Qualifications

  • Education: Bachelor’s degree in Computer Science, Information Technology, Statistics, or a related field.
  • SQL Mastery: 4+ years of experience with relational databases (PostgreSQL, MySQL, or MS SQL Server). You should be comfortable with performance tuning and complex data modeling.
  • Python Proficiency: Strong programming skills in Python, particularly for data manipulation (Pandas, NumPy) and database interaction (SQLAlchemy, Psycopg2).
  • Healthcare Experience: Familiarity with healthcare data standards (HL7, FHIR) or experience working with Electronic Health Records (EHR) is highly preferred.
  • ETL Expertise: Proven track record of building and managing end-to-end data pipelines for structured and unstructured data.
  • Analytical Mindset: Ability to troubleshoot complex data issues and translate business requirements into efficient technical solutions.


To process your details please fill-out the google form.

https://forms.gle/4psh2vaUi115TKnm6

Read more
GeniWay Technologies

at GeniWay Technologies

1 candid answer
GeniWay Hiring
Posted by GeniWay Hiring
Pune
2 - 3 yrs
₹8L - ₹10L / yr
skill iconPython
FastAPI
SQL
skill iconNodeJS (Node.js)
Database modeling
+5 more

About Company (GeniWay)

GeniWay Technologies is pioneering India’s first AI-native platform for personalized learning and career guidance, transforming the way students learn, grow, and determine their future path. Addressing challenges in the K-12 system such as one-size-fits-all teaching and limited career awareness, GeniWay leverages cutting-edge AI to create a tailored educational experience for every student. The core technology includes an AI-powered learning engine, a 24x7 multilingual virtual tutor, and Clario, a psychometrics-backed career guidance system. Aligned with NEP 2020 policies, GeniWay is on a mission to make high-quality learning accessible to every student in India, regardless of their background or region.


What you’ll do

  • Build the career assessment backbone: attempt lifecycle (create/resume/submit), timing metadata, partial attempts, idempotent APIs.
  • Implement deterministic scoring pipelines with versioning and audit trails (what changed, when, why).
  • Own Postgres data modeling: schemas, constraints, migrations, indexes, query performance.
  • Create safe, structured GenAI context payloads (controlled vocabulary, safety flags, eval datasets) to power parent/student narratives.
  • Raise reliability: tests for edge cases, monitoring, reprocessing/recalculation jobs, safe logging (no PII leakage).
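One way to picture the idempotent-submission and deterministic-scoring requirements above (all names are hypothetical; a real service would persist results to Postgres rather than an in-memory dict):

```python
import hashlib
import json

class AttemptStore:
    """Minimal sketch of idempotent attempt submission.

    Re-submitting the same idempotency key returns the original result
    instead of creating (and scoring) a duplicate attempt.
    """

    def __init__(self):
        self._results = {}

    def submit(self, idempotency_key: str, answers: dict) -> dict:
        if idempotency_key in self._results:
            return self._results[idempotency_key]  # replay: no double-processing
        # Deterministic scoring over canonicalised input, tagged with a version
        canonical = json.dumps(answers, sort_keys=True)
        result = {
            "score": sum(1 for v in answers.values() if v == "correct"),
            "checksum": hashlib.sha256(canonical.encode()).hexdigest()[:8],
            "scorer_version": "v1",  # audit trail: which scorer produced this
        }
        self._results[idempotency_key] = result
        return result

store = AttemptStore()
first = store.submit("key-1", {"q1": "correct", "q2": "wrong"})
again = store.submit("key-1", {"q1": "correct", "q2": "wrong"})
```

Versioning the scorer makes recalculation jobs auditable: old results record which rules produced them.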


Must-have skills

  • Backend development in Python (FastAPI/Django/Flask) or Node (NestJS) with production API experience.
  • Strong SQL + PostgreSQL fundamentals (transactions, indexes, schema design, migrations).
  • Testing discipline: unit + integration tests for logic-heavy code; systematic debugging approach.
  • Comfort using AI coding copilots to speed up scaffolding/tests/refactors — while validating correctness.
  • Ownership mindset: cares about correctness, data integrity, and reliability.


Good to have

  • Experience with rule engines, scoring systems, or audit-heavy domains (fintech, healthcare, compliance).
  • Event schemas/telemetry pipelines and observability basics.
  • Exposure to RAG/embeddings/vector DBs or prompt evaluation harnesses.


Location: Pune (on-site for first 3 months; hybrid/WFH flexibility thereafter)

Employment Type: Full-time

Experience: 2–3 years (correctness-first; strong learning velocity)

Compensation: Competitive (₹8–10 LPA fixed cash) + ESOP (equity ownership, founding-early employee level)

Joining Timeline: 2–3 weeks / Immediate


Why join (founding team)

  • You’ll build core IP: scoring integrity and data foundations that everything else depends on.
  • Rare skill-building: reliable systems + GenAI-safe context/evals (not just API calls).
  • Meaningful ESOP upside at an early stage.
  • High trust, high ownership, fast learning.
  • High-impact mission: reduce confusion and conflict in student career decisions; help families make better choices, transform student lives by making great learning personal.


Hiring process (fast)

1.      20-min intro call (fit + expectations).

2.      45–60 min SQL & data modeling, API deep dive.

3.      Practical exercise (2–3 hours max) implementing a small scoring service with tests.

4.      Final conversation + offer.


How to apply

Reply with your resume/LinkedIn profile plus one example of a system/feature where you owned data modeling and backend integration (a short paragraph is fine).

Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Hyderabad
5 - 7 yrs
₹15L - ₹21L / yr
skill iconPython
AWS Glue
PySpark
Terraform

Hiring for Data Engineer


Exp : 5 - 7 yrs

Work Location : Hyderabad (Hybrid)


Must Skills : Python, AWS Glue , PySpark, Terraform

Read more
Pune
3 - 5 yrs
₹12L - ₹15L / yr
skill iconJavascript
skill iconReact.js
FastAPI
TypeScript
skill iconPython
+3 more

About Company (GeniWay)

GeniWay Technologies is pioneering India’s first AI-native platform for personalized learning and career guidance, transforming the way students learn, grow, and determine their future path. Addressing challenges in the K-12 system such as one-size-fits-all teaching and limited career awareness, GeniWay leverages cutting-edge AI to create a tailored educational experience for every student. The core technology includes an AI-powered learning engine, a 24x7 multilingual virtual tutor, and Clario, a psychometrics-backed career guidance system. Aligned with NEP 2020 policies, GeniWay is on a mission to make high-quality learning accessible to every student in India, regardless of their background or region.

What you’ll do

  • Own and ship end-to-end product journeys (mobile-first): onboarding → assessment runner → results → career map → parent alignment.
  • Build/maintain backend APIs and shared platform capabilities (auth, sessions, data contracts, telemetry).
  • Integrate GenAI responsibly: prompt/versioning, eval harnesses, guardrails, fallbacks (AI is core, not a side feature).
  • Set the engineering quality bar: code reviews, tests, CI/CD, release gating, observability, performance budgets.
  • Mentor and lead a lean pod; grow into Lead Engineer responsibility within ~6 months based on delivery.
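The "eval harness" idea mentioned above can be sketched minimally (the model callable and cases below are stand-ins; a real harness would call an LLM API and apply richer checks than substring matching):

```python
def run_eval(responder, cases):
    """Run each eval case through the model callable and record pass/fail."""
    results = []
    for case in cases:
        output = responder(case["prompt"])
        results.append({"id": case["id"], "passed": case["expect"] in output})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return results, pass_rate

# Deterministic stand-in "model" for illustration only
def fake_model(prompt: str) -> str:
    return "Career fit: engineering" if "career" in prompt.lower() else "I am not sure."

cases = [
    {"id": "c1", "prompt": "Suggest a career path", "expect": "Career fit"},
    {"id": "c2", "prompt": "What's the weather?", "expect": "not sure"},
    {"id": "c3", "prompt": "career map please", "expect": "engineering"},
]
results, rate = run_eval(fake_model, cases)
```

Gating releases on a pass-rate threshold is one way to keep prompt changes from silently regressing behaviour.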

Must-have skills

  • Strong TypeScript + React/Next.js (or equivalent) and proven experience shipping state-heavy UIs.
  • Backend/API development (Node/NestJS or Python/FastAPI) with solid error handling and clean contracts.
  • Good SQL fundamentals and hands-on PostgreSQL.
  • Comfort using AI coding copilots (Copilot/Cursor) to accelerate scaffolding/tests/refactors — with rigorous verification.
  • Startup mindset: ownership, ambiguity tolerance, and ability to ship weekly.

Good to have

  • Hands-on GenAI product work: tool calling, RAG/embeddings, vector DBs (Qdrant/Pinecone), LangChain/LlamaIndex (or similar).
  • Experience with conversational flows (WhatsApp or chat-like UX).
  • DevOps/observability basics (logs/metrics/traces).
  • Public proof of ownership: OSS, side projects, hackathons, shipped 0→1 features.

Location: Pune (on-site for first 3 months; hybrid/WFH flexibility thereafter)

Employment Type: Full-time

Experience: 3–4 years (high ownership; leadership potential)

Compensation: Competitive (₹12–15 LPA fixed cash) + ESOP (equity ownership, founding-early employee level).

Standard benefits: Health insurance, paid leave, learning/training budget.

Joining Timeline: 2–3 weeks / Immediate


Why join (founding team)

  • Meaningful ownership: ESOP at an early stage (real upside, not token equity).
  • Career acceleration: scope and autonomy typically seen much later in larger orgs.
  • AI-first engineering culture: copilots + LLM workflows across SDLC, with strong discipline on correctness and safety.
  • High-impact mission: reduce confusion and conflict in student career decisions; help families make better choices, transform student lives by making great learning personal.
  • Lean, high-trust team: direct access to founder + fast decisions; minimal bureaucracy.


Hiring process (fast)

  1. 20-min intro call (fit + expectations).
  2. 60–90 min technical deep dive (system design + trade-offs).
  3. Practical exercise (1–2 hours max) — focused and relevant (assessment flow or GenAI eval harness).
  4. Final conversation + offer.


How to apply 

Reply with your resume/LinkedIn profile and 2 links (any of: GitHub, portfolio, shipped product, blog, or a short note describing a feature you owned end-to-end).

Read more
Startup

Startup

Agency job
via Techno Wise by Chanchal Amin
Hyderabad
2 - 6 yrs
₹8L - ₹11L / yr
skill iconPython
skill iconDjango
skill iconFlask
FastAPI
SQL
+6 more

Required Skills and Qualifications:

  • 2–3 years of professional experience in Python development.
  • Strong understanding of object-oriented programming.
  • Experience with frameworks such as Django, Flask, or FastAPI.
  • Knowledge of REST APIs, JSON, and web integration.
  • Familiarity with SQL and database management systems.
  • Experience with Git or other version control tools.
  • Good problem-solving and debugging skills.
  • Strong communication and teamwork abilities.


Read more
Mindreams Infotech Pvt Ltd
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹30L / yr
skill iconPython
skill iconAmazon Web Services (AWS)
skill iconReact.js

What you’ll do


  • Build and scale backend services and APIs using Python
  • Work on cross-language integrations (Python ↔ PHP)
  • Develop frontend features using React (Angular is a plus)
  • Deploy, monitor, and manage applications on AWS
  • Own features end-to-end: development, performance, and reliability
  • Collaborate closely with product, QA, and engineering teams


Tech Stack


  • Backend: Python (working knowledge of PHP is a strong plus)
  • Frontend: React (Angular is a plus)
  • Cloud: AWS
  • Version Control: Git / GitHub


Experience


  • 5–10 years of professional software development experience
  • Strong hands-on experience with Python
  • Hands-on experience deploying and managing applications on AWS
  • Working knowledge of modern frontend frameworks


Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Gurugram
4 - 10 yrs
₹12L - ₹15L / yr
Insights Manager
skill iconData Analytics
Ecommerce
D2C
SQL
+7 more

Position: Insights Manager

Location: Gurugram (Onsite)

Experience Required: 4+ Years

Working Days: 5 Days (Mon to Fri)

About the Role

We are seeking a hands-on Insights Manager to build the analytical backbone that powers decision-making. This role sits at the centre of the data ecosystem, partnering with Category, Commercial, Marketing, Sourcing, Fulfilment, Product, and Growth teams to translate data into insight, automation, and action.

You will design self-running reporting systems, maintain data quality in collaboration with data engineering, and build analytical models that directly improve pricing, customer experience, and operational efficiency. The role requires strong e-commerce domain understanding and the ability to move from data to decisions with speed and precision.

Key Responsibilities

1. Data Platform & Governance

  • Partner with data engineering to ensure clean and reliable data across Shopify, GA4, Ad platforms, CRM, and ERP systems
  • Define and maintain KPI frameworks (ATC, CVR, AOV, Repeat Rate, Refunds, LTV, CAC, etc.)
  • Oversee pipeline monitoring, QA checks, and metric documentation
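To make the KPI-framework bullet concrete: metrics like CVR, AOV, and repeat rate reduce to simple aggregations over order data. A minimal sketch in Python, with hypothetical field names (`customer_id`, `amount`) not tied to any specific schema:

```python
# Minimal KPI sketch: conversion rate (CVR), average order value (AOV),
# and repeat rate from flat session/order records. Field names are
# illustrative only.

def funnel_kpis(sessions: int, orders: list[dict]) -> dict:
    """Compute CVR, AOV, and repeat rate from raw order rows."""
    revenue = sum(o["amount"] for o in orders)
    customers = [o["customer_id"] for o in orders]
    repeaters = {c for c in customers if customers.count(c) > 1}
    return {
        "cvr": len(orders) / sessions if sessions else 0.0,  # orders per session
        "aov": revenue / len(orders) if orders else 0.0,     # revenue per order
        "repeat_rate": len(repeaters) / len(set(customers)) if customers else 0.0,
    }

orders = [
    {"customer_id": "c1", "amount": 500},
    {"customer_id": "c1", "amount": 300},
    {"customer_id": "c2", "amount": 200},
]
print(funnel_kpis(sessions=100, orders=orders))
```

In practice these definitions would live in documented, version-controlled metric code (e.g., dbt models) rather than ad-hoc scripts, which is what "KPI frameworks" and "metric documentation" point at.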

2. Reporting, Dashboards & Automation

  • Build automated datamarts and dashboards for business teams
  • Integrate APIs and automate data flows across multiple sources
  • Create actionable visual stories and executive summaries
  • Use AI and automation tools to improve insight delivery speed

3. Decision Models & Applied Analytics

  • Build models for pricing, discounting, customer segmentation, inventory planning, delivery SLAs, and recommendations
  • Translate analytics outputs into actionable playbooks for internal teams

4. Insights & Actionability

  • Diagnose performance shifts and identify root causes
  • Deliver weekly and monthly insight-driven recommendations
  • Improve decision-making speed and quality across functions

Qualifications & Experience

  • 4–7 years of experience in analytics or product insights (e-commerce / D2C / retail)
  • Strong SQL and Python skills
  • Hands-on experience with GA4, GTM, and dashboarding tools (Looker / Tableau / Power BI)
  • Familiarity with CRM platforms like Klaviyo, WebEngage, or MoEngage
  • Strong understanding of e-commerce KPIs and customer metrics
  • Ability to communicate insights clearly to non-technical stakeholders

What We Offer

  • Greenfield opportunity to build the data & insights platform from scratch
  • High business impact across multiple functions
  • End-to-end exposure from analytics to automation and applied modelling
  • Fast-paced, transparent, and collaborative work culture


Read more
Remote only
3 - 8 yrs
₹20L - ₹30L / yr
ETL
Google Cloud Platform (GCP)
skill iconPython
Pipeline management
BigQuery

About Us:


CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Job Summary:


We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities:


  • ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
  • Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
  • Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
  • Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 
  • API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
  • Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
  • Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
  • Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
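As a rough illustration of the ingest-clean-transform step described above (independent of any specific GCP service), here is a hedged, stdlib-only sketch; the column names and cleaning rules are invented for the example:

```python
import csv
import io
import json

# Toy raw input standing in for a downloaded public dataset.
RAW_CSV = """country,year,population
India,2020, 1380004385
India,2021,1393409038
,2021,99
"""

def transform(raw: str) -> list[dict]:
    """Clean CSV rows into JSON-ready records: drop rows missing a key
    field, strip whitespace, and cast numeric columns to int."""
    out = []
    for row in csv.DictReader(io.StringIO(raw)):
        if not row["country"]:          # basic validation: key field present
            continue
        out.append({
            "country": row["country"].strip(),
            "year": int(row["year"]),
            "population": int(row["population"].strip()),
        })
    return out

records = transform(RAW_CSV)
print(json.dumps(records, indent=2))
```

A production pipeline would run logic like this inside Cloud Run or Dataflow and add schema mapping, entity resolution, and validation stages, as the bullets describe.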

Qualifications and Skills:


  • Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
  • Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
  • Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
  • Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:


  • Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
  • Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
  • Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
  • Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
  • Experience with data validation techniques and tools.
  • Familiarity with CI/CD practices and the ability to work in an Agile framework.
  • Strong problem-solving skills and keen attention to detail.


Preferred Qualifications:


  • Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
  • Familiarity with similar large-scale public dataset integration initiatives.
  • Experience with multilingual data integration.
Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Remote only
9 - 18 yrs
₹5L - ₹29L / yr
skill iconPython
SQL
NOSQL Databases
DBA

Job Summary


We are looking for an experienced Python DBA with strong expertise in Python scripting and SQL/NoSQL databases. The candidate will be responsible for database administration, automation, performance optimization, and ensuring availability and reliability of database systems.


Key Responsibilities

  • Administer and maintain SQL and NoSQL databases
  • Develop Python scripts for database automation and monitoring
  • Perform database performance tuning and query optimization
  • Manage backups, recovery, replication, and high availability
  • Ensure data security, integrity, and compliance
  • Troubleshoot and resolve database-related issues
  • Collaborate with development and infrastructure teams
  • Monitor database health and performance
  • Maintain documentation and best practices
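To make the "Python scripts for database automation and monitoring" bullet concrete, here is a hedged sketch using SQLite from the standard library; a production probe would target PostgreSQL/MySQL via their client libraries and push alerts to a real channel:

```python
import sqlite3

def check_table_health(conn, table: str, max_rows: int) -> dict:
    """Tiny monitoring probe: row count and an integrity check for one table."""
    rows = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    integrity = conn.execute("PRAGMA integrity_check").fetchone()[0]
    return {
        "table": table,
        "rows": rows,
        "integrity_ok": integrity == "ok",
        "over_threshold": rows > max_rows,  # would trigger an alert in practice
    }

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)", [("ok",)] * 5)
print(check_table_health(conn, "jobs", max_rows=1000))
```

Note the table name is interpolated directly here for brevity; real monitoring scripts should whitelist table names rather than accept arbitrary input.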


Required Skills

  • 10+ years of experience in Database Administration
  • Strong proficiency in Python
  • Experience with SQL databases (PostgreSQL, MySQL, Oracle, SQL Server)
  • Experience with NoSQL databases (MongoDB, Cassandra, etc.)
  • Strong understanding of indexing, schema design, and performance tuning
  • Good analytical and problem-solving skills


Read more
Bengaluru (Bangalore)
6 - 12 yrs
₹12L - ₹45L / yr
skill iconJava
skill iconPython
skill iconReact.js
skill iconNodeJS (Node.js)

We are seeking an experienced Engineering Leader to drive the design and delivery of secure, scalable, and high-performance financial platforms. This role requires strong technical leadership, people management skills, and deep understanding of FinTech systems, compliance, and reliability.


 Key Responsibilities

  • Lead multiple engineering teams delivering FinTech platforms (payments, lending, banking, wallets, trading, or risk systems)
  • Own architecture and system design for high-availability, low-latency, secure systems
  • Partner with Product, Compliance, Risk, and Business teams to translate financial requirements into technical solutions
  • Ensure adherence to security standards, regulatory compliance (PCI-DSS, SOC2, ISO), and data privacy
  • Drive best practices in coding, testing, DevOps, observability, and system resilience
  • Build, mentor, and retain high-performing engineering teams
  • Oversee sprint planning, delivery timelines, and stakeholder communication
  • Lead incident response, root cause analysis, and platform stability improvements


Required Skills & Qualifications

  • 4+ years in leadership roles
  • Strong hands-on expertise in Java / Node.js / Python / .NET / Go
  • Experience building FinTech platforms — payments, banking, lending, trading, or risk systems
  • Deep knowledge of distributed systems, microservices, APIs, databases, and cloud (AWS/Azure/GCP)
  • Strong understanding of security, fraud prevention, and regulatory compliance
  • Experience working in Agile/Scrum environments
  • Excellent stakeholder and people management skills


Read more
OpsTree Solutions

at OpsTree Solutions

4 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
4yrs+
Upto ₹30L / yr (Varies)
skill iconPython
skill iconAmazon Web Services (AWS)
EKS
skill iconKubernetes
DevOps
+3 more

Key Responsibilities:

  • Lead the architecture, design, and implementation of scalable, secure, and highly available AWS infrastructure leveraging services such as VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, and Secrets Manager.
  • Develop and maintain reusable, modular IaC frameworks using Terraform and Terragrunt, and mentor team members on IaC best practices.
  • Drive automation of infrastructure provisioning, deployment workflows, and routine operations through advanced Python scripting.
  • Take ownership of cost optimization strategy by analyzing usage patterns, identifying savings opportunities, and implementing guardrails across multiple AWS environments.
  • Define and enforce infrastructure governance, including secure access controls, encryption policies, and secret management mechanisms.
  • Collaborate cross-functionally with development, QA, and operations teams to streamline and scale CI/CD pipelines for containerized microservices on Kubernetes (EKS).
  • Establish monitoring, alerting, and observability practices to ensure platform health, resilience, and performance.
  • Serve as a technical mentor and thought leader, guiding junior engineers and shaping cloud adoption and DevOps culture across the organization.
  • Evaluate emerging technologies and tools, recommending improvements to enhance system performance, reliability, and developer productivity.
  • Ensure infrastructure complies with security, regulatory, and operational standards, and drive initiatives around audit readiness and compliance.
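One small, hedged example of the "guardrails" idea from the cost-optimization bullet: a pure-Python check over a mocked resource inventory. A real implementation would pull the inventory from the AWS APIs (e.g., via boto3) and run on a schedule; the tag names here are illustrative:

```python
# Mandatory cost-allocation tags every resource must carry (assumed policy).
REQUIRED_TAGS = {"owner", "cost-center", "env"}

def untagged_resources(inventory: list[dict]) -> list[str]:
    """Return IDs of resources missing any mandatory cost-allocation tag."""
    return [
        r["id"]
        for r in inventory
        if REQUIRED_TAGS - set(r.get("tags", {}))  # non-empty set => missing tags
    ]

inventory = [
    {"id": "i-abc", "tags": {"owner": "data", "cost-center": "42", "env": "prod"}},
    {"id": "i-def", "tags": {"owner": "ml"}},   # missing cost-center, env
    {"id": "vol-123", "tags": {}},              # completely untagged
]
print(untagged_resources(inventory))
```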

Mandatory Skills & Experience:

  • AWS (Advanced Expertise): VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, Secrets Management
  • Infrastructure as Code: Extensive experience with Terraform and Terragrunt, including module design and IaC strategy
  • Strong hold in Kubernetes
  • Scripting & Automation: Proficient in Python, with a strong track record of building tools, automating workflows, and integrating cloud services
  • Cloud Cost Optimization: Proven ability to analyze cloud spend and implement sustainable cost control strategies
  • Leadership: Experience in leading DevOps/infrastructure teams or initiatives, mentoring engineers, and making architecture-level decisions

Nice to Have:

  • Experience designing or managing CI/CD pipelines for Kubernetes-based environments
  • Backend development background in Python (e.g., FastAPI, Flask)
  • Familiarity with monitoring/observability tools such as Prometheus, Grafana, CloudWatch
  • Understanding of system performance tuning, capacity planning, and scalability best practices
  • Exposure to compliance standards such as SOC 2, HIPAA, or ISO 27001
Read more
Forbes Advisor

at Forbes Advisor

3 candid answers
Bisman Gill
Posted by Bisman Gill
Remote only
4yrs+
Upto ₹27L / yr (Varies)
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
skill iconPython
SQL
skill iconAmazon Web Services (AWS)
+6 more

Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.

We do this by combining data-driven content, rigorous product comparisons, and user-first design all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.

The Role

We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.

This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.

You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.


What you’ll do

● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python

● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems

● Model data using dbt, with an emphasis on quality, reuse, and documentation

● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics

● Write high-performance SQL and support analytics and reporting teams in self-serve data access

● Monitor and improve data quality, lineage, and governance across critical workflows

● Collaborate with engineers, analysts, and business partners across the US, UK, and India
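Stripped of Airflow specifics, the orchestration pattern in the bullets above is "run tasks in dependency order". A hedged, pure-Python sketch using the standard library's `graphlib`; the task names are invented to mirror the ad-platform and dbt steps mentioned:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on; this mirrors how an
# Airflow DAG wires extract -> transform -> load steps.
DAG = {
    "extract_ads": set(),
    "extract_ga4": set(),
    "transform_dbt": {"extract_ads", "extract_ga4"},
    "load_bigquery": {"transform_dbt"},
}

def run(dag: dict) -> list[str]:
    """Execute tasks in a valid topological order."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")  # a real runner would invoke operators here
    return order

order = run(DAG)
```

Airflow adds scheduling, retries, and observability on top of exactly this dependency model.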


What You Bring

● 4+ years of data engineering experience, ideally in a global, distributed team

● Strong Python development skills and experience

● Expert in SQL for data transformation, analysis, and debugging

● Deep knowledge of Airflow and orchestration best practices

● Proficient in DBT (data modeling, testing, release workflows)

● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus

● Strong grasp of data governance, observability, and privacy standards

● Excellent written and verbal communication skills


Nice to have

● Experience working with digital marketing and performance data, including:

Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)

● Familiarity with BI tools like Tableau or Looker

● Exposure to attribution models, media mix modeling, or A/B testing infrastructure

● Collaboration experience with data scientists or machine learning workflows


Why Join Us

● Monthly long weekends — every third Friday off

● Wellness reimbursement to support your health and balance

● Paid parental leave

● Remote-first with flexibility and trust

● Work with a world-class data and marketing team inside a globally recognized brand

Read more
Ethara AI
Usha Pandey
Posted by Usha Pandey
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹10L - ₹12L / yr
skill iconPython
skill iconJavascript
RESTful APIs

Company: Ethara AI

Location: Gurgaon (Work From Office)

Employment Type: Full-Time

Experience Required: 2–4 Years

Open Roles: Software Engineers (Python Fullstack)


About Us

Ethara AI is a leading AI and data services company in India, specializing in building high-quality, domain-specific datasets for Large Language Model (LLM) fine-tuning. Our work bridges the gap between academic learning and real-world AI applications, and we are committed to nurturing the next generation of AI professionals.


Role Overview:

We are looking for experienced Python Fullstack Software Engineers who can contribute to post-training AI development workflows with strong proficiency in coding tasks and evaluation logic. This role involves working on high-impact AI infrastructure projects, including but not limited to:

  • Code generation, validation, and transformation across Python, Java, JavaScript, and modern frameworks
  • Evaluation and improvement of model-generated code responses
  • Designing and verifying web application features, APIs, and test cases used in AI model alignment
  • Interpreting and executing task specifications to meet rigorous quality benchmarks
  • Collaborating with internal teams to meet daily throughput and quality targets within a structured environment

Key Responsibilities:

  • Work on fullstack engineering tasks aligned with LLM post-training workflows
  • Analyze model-generated outputs for correctness, coherence, and adherence to task requirements
  • Write, review, and verify application logic and coding prompts across supported languages and frameworks
  • Maintain consistency, quality, and efficiency in code-focused deliverables
  • Engage with leads and PMs to meet productivity benchmarks (8–9 working hours daily)
  • Stay updated with AI development standards and contribute to refining internal engineering processes
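The "analyze model-generated outputs for correctness" responsibility often boils down to executing a candidate snippet against reference cases. A deliberately simplified, hedged sketch (the function and case format are invented; a real harness would sandbox execution rather than `exec` untrusted code directly):

```python
def evaluate_candidate(src: str, func_name: str, cases: list[tuple]) -> float:
    """Exec a model-generated function and score it against (args, expected) cases.
    NOTE: exec on untrusted code is unsafe; production harnesses sandbox this."""
    ns: dict = {}
    exec(src, ns)
    fn = ns[func_name]
    passed = sum(1 for args, expected in cases if fn(*args) == expected)
    return passed / len(cases)

# A toy "model response" to evaluate.
model_output = """
def add(a, b):
    return a + b
"""
score = evaluate_candidate(model_output, "add", [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)])
print(score)
```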


Technical Skills Required:

  • Strong proficiency in Python; Java and Node.js are nice to have
  • Strong experience in frontend technologies: React.js, HTML/CSS, TypeScript
  • Familiarity with REST APIs, testing frameworks, and Git-based workflows
  • Ability to analyze, debug, and rewrite logic for correctness and clarity
  • Good understanding of model response evaluation and instruction-based coding logic

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • 2–4 years of experience in a software development role (fullstack preferred)
  • Prior exposure to AI/LLM environments or code-based evaluation tasks is a plus
  • Excellent written communication and logical reasoning abilities
  • Comfortable working from the Gurgaon office and committing to 8–9 hours of productive work daily


Why Join Us

  • Be part of a high-growth team at the forefront of LLM post-training development
  • Work on real-world AI engineering problems with production-grade impact
  • Competitive compensation with performance-driven growth opportunities
  • Structured workflow, collaborative culture, and technically challenging projects


Read more
Kanerika Software

at Kanerika Software

3 candid answers
2 recruiters
Ariba Khan
Posted by Ariba Khan
Hyderabad, Indore, Ahmedabad
7 - 11 yrs
Upto ₹30L / yr (Varies)
SQL
Snowflake
Airflow
skill iconPython

About Kanerika:

Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.


We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.


Awards and Recognitions:

Kanerika has won several awards over the years, including:

1. Best Place to Work 2023 by Great Place to Work®

2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today

3. NASSCOM Emerge 50 Award in 2014

4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture

5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.


Working for us:

Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.


Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.


Role Responsibilities: 

The following are the high-level responsibilities you will take on, though not limited to:

  • Design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
  • Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
  • Enable business analytics and self-service reporting through Power BI and other visualization tools.
  • Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
  • Implement and enforce best practices for data governance, data quality, and security.
  • Mentor and guide junior data engineers; establish coding and design standards.
  • Evaluate emerging technologies and tools to continuously improve the data ecosystem.


Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
  • 7+ years of experience in data engineering or data platform development
  • Strong hands-on experience in SQL, Snowflake, Python, and Airflow
  • Solid understanding of data modeling, data governance, security, and CI/CD practices.

Preferred Qualifications:

  • Familiarity with data modeling techniques and practices for Power BI.
  • Knowledge of Azure Databricks or other data processing frameworks.
  • Knowledge of Microsoft Fabric or other Cloud Platforms.


What we need

  • B.Tech in Computer Science or equivalent.


Why join us?

  • Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
  • Gain hands-on experience with exposure to real-world data and AI projects.
  • Opportunity to learn from experienced professionals and enhance your skills.
  • Contribute to exciting initiatives and make an impact from day one.
  • Competitive compensation and potential for growth within the company.
  • Recognized for excellence in data and AI solutions with industry awards and accolades.


Employee Benefits:

1. Culture:

  • Open Door Policy: Encourages open communication and accessibility to management.
  • Open Office Floor Plan: Fosters a collaborative and interactive work environment.
  • Flexible Working Hours: Allows employees to have flexibility in their work schedules.
  • Employee Referral Bonus: Rewards employees for referring qualified candidates.
  • Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.


2. Inclusivity and Diversity:

  • Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
  • Mandatory POSH training: Promotes a safe and respectful work environment.


3. Health Insurance and Wellness Benefits:

  • GMC and Term Insurance: Offers medical coverage and financial protection.
  • Health Insurance: Provides coverage for medical expenses.
  • Disability Insurance: Offers financial support in case of disability.


4. Child Care & Parental Leave Benefits:

  • Company-sponsored family events: Creates opportunities for employees and their families to bond.
  • Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
  • Family Medical Leave: Offers leave for employees to take care of family members' medical needs.


5. Perks and Time-Off Benefits:

  • Company-sponsored outings: Organizes recreational activities for employees.
  • Gratuity: Provides a monetary benefit as a token of appreciation.
  • Provident Fund: Helps employees save for retirement.
  • Generous PTO: Offers more than the industry standard for paid time off.
  • Paid sick days: Allows employees to take paid time off when they are unwell.
  • Paid holidays: Gives employees paid time off for designated holidays.
  • Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.


6. Professional Development Benefits:

  • L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
  • Mentorship Program: Offers guidance and support from experienced professionals.
  • Job Training: Provides training to enhance job-related skills.
  • Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
  • Promote from Within: Encourages internal growth and advancement opportunities.
Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹14L - ₹20L / yr
skill iconPython
Mainframe
skill iconC#
SDET
Test Automation (QA)
+37 more

Job Details

Job Title: Java Full Stack Developer 

Industry: Global digital transformation solutions provider

Domain: Information Technology (IT)

Experience Required: 5-7 years

Working Mode: 3 days in office, Hybrid model.

Job Location: Bangalore

CTC Range: Best in Industry


Job Description:

SDET (Software Development Engineer in Test)


Job Responsibilities:

  • Test Automation: develop, maintain, and execute automated test scripts using test automation frameworks; design and implement testing tools and frameworks to support automated testing.
  • Software Development: participate in the design and development of software components to improve testability; write code actively, contribute to the development of tools, and work closely with developers to debug complex issues.
  • Quality Assurance: collaborate with the development team to understand software features and technical implementations; develop quality assurance standards and ensure adherence to best testing practices.
  • Integration Testing: conduct integration and functional testing to ensure that components work as expected individually and when combined.
  • Performance and Scalability Testing: perform performance and scalability testing to identify bottlenecks and optimize application performance.
  • Test Planning and Execution: create detailed, comprehensive, and well-structured test plans and test cases; execute manual and/or automated tests and analyze results to ensure product quality.
  • Bug Tracking and Resolution: identify, document, and track software defects using bug tracking tools; verify fixes and work closely with developers to resolve issues.
  • Continuous Improvement: stay updated on emerging tools and technologies relevant to the SDET role; constantly look for ways to improve testing processes and frameworks.


Skills and Qualifications:

  • Strong programming skills, particularly in languages such as COBOL, JCL, Java, C#, Python, or JavaScript.
  • Strong experience in Mainframe environments.
  • Experience with test automation tools and frameworks like Selenium, JUnit, TestNG, or Cucumber.
  • Excellent problem-solving skills and attention to detail.
  • Familiarity with CI/CD tools and practices, such as Jenkins, Git, Docker, etc.
  • Good understanding of web technologies and databases is beneficial.
  • Strong communication skills for interfacing with cross-functional teams.


Qualifications

  • 5+ years of experience as a software developer, QA Engineer, or SDET.
  • 5+ years of hands-on experience with Java or Selenium.
  • 5+ years of hands-on experience with Mainframe environments.
  • 4+ years designing, implementing, and running test cases.
  • 4+ years working with test processes, methodologies, tools, and technology.
  • 4+ years performing functional and UI testing and quality reporting.
  • 3+ years of technical QA management experience leading onshore and offshore resources.
  • Passion for driving best practices in the testing space.
  • Thorough understanding of functional, stress, performance, regression, and mobile testing.
  • Knowledge of software engineering practices and agile approaches.
  • Experience building or improving test automation frameworks.
  • Proficiency in CI/CD integration and pipeline development in Jenkins, Spinnaker, or similar tools.
  • Proficiency in UI automation (Serenity/Selenium, Robot, Watir).
  • Experience in Gherkin (BDD/TDD).
  • Ability to quickly diagnose issues within the quality assurance environment and communicate findings to technical and non-technical partners.
  • Strong desire to establish and improve product quality.
  • Willingness to take challenges head on while being part of a team.
  • Ability to work under tight deadlines and within a team environment.
  • Experience in test automation using UFT and Selenium.
  • UFT/Selenium experience in building object repositories, standard and custom checkpoints, parameterization, reusable functions, recovery scenarios, descriptive programming, and API testing.
  • Knowledge of VBScript, C#, Java, HTML, and SQL.
  • Experience using Git or other version control systems.
  • Experience developing, supporting, and/or testing web applications.
  • Understanding of the need for testing of security requirements.
  • Ability to understand APIs (JSON and XML formats), with experience using API testing tools like Postman, Swagger, or SoapUI.
  • Excellent communication, collaboration, reporting, analytical, and problem-solving skills.
  • Solid understanding of release cycles and QA/testing methodologies.
  • ISTQB certification is a plus.
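As a tiny illustration of the automated-test-script work this role centers on, here is a hedged Python `unittest` example; the function under test and its normalization rule are invented for the sketch (the role itself leans on Selenium/UFT and mainframe tooling):

```python
import unittest

def normalize_account_id(raw: str) -> str:
    """Toy function under test: trim and zero-pad a mainframe-style account id."""
    return raw.strip().zfill(10)

class TestNormalizeAccountId(unittest.TestCase):
    def test_pads_short_ids(self):
        self.assertEqual(normalize_account_id("42"), "0000000042")

    def test_strips_whitespace(self):
        self.assertEqual(normalize_account_id("  123  "), "0000000123")

# Run the suite programmatically, as a CI pipeline step would.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeAccountId)
)
```

The same structure (fixture, assertions, runner wired into CI) carries over to Selenium page tests or API checks with Postman/SoapUI collections.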


Skills: Python, Mainframe, C#

Notice period: 0 to 15 days only

Read more
ByteFoundry AI

at ByteFoundry AI

4 candid answers
Bisman Gill
Posted by Bisman Gill
Remote only
3 - 8 yrs
Upto ₹40L / yr (Varies)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconPython
SQL
skill iconAmazon Web Services (AWS)
+3 more

About the Role

We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.

Responsibilities

- Develop and maintain full-stack applications.

- Implement clean, maintainable, and efficient code.

- Collaborate with designers, product managers, and backend engineers.

- Participate in code reviews and debugging.

- Work with REST APIs/GraphQL.

- Contribute to CI/CD pipelines.

- Ability to work independently as well as within a collaborative team environment.


Required Technical Skills

- Strong knowledge of JavaScript/TypeScript.

- Experience with React.js, Next.js.

- Backend experience with Node.js, Express, NestJS.

- Understanding of SQL/NoSQL databases.

- Experience with Git, APIs, and debugging tools.

- Cloud familiarity (AWS/GCP/Azure).

AI and System Mindset

Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.

Soft Skills

- Strong problem-solving ability.

- Good communication and teamwork.

- Fast learner and adaptable.

Education

Bachelor's degree in Computer Science / Engineering or equivalent.

Read more
IT Services Company

IT Services Company

Agency job
Kochi (Cochin)
8 - 12 yrs
₹15L - ₹30L / yr
Microservices
skill iconPython
RESTful APIs


Key Responsibilities

• Design and architect robust, scalable, and secure software solutions.

• Providing technical advice by evaluating new technologies and products to determine their feasibility and desirability for the current business environment, detect critical deficiencies, and recommend solutions.

• Supervising and reviewing technology diagnosis and assessment activities.

• Working with the project managers to define the scope and cost estimation.

• Collaborate closely with product managers, developers, and stakeholders to align technical solutions with business objectives.

• Provide leadership and mentorship to the development team, fostering a culture of innovation and excellence.

• Evaluate and recommend tools, technologies, and frameworks to optimize product performance and development.

• Oversee the end-to-end technical implementation of projects, ensuring high-quality deliverables within defined timelines.

• Establish the best practices for software development, deployment, and maintenance.

• Stay updated on emerging trends in software architecture and maritime technology to integrate industry best practices.

Required Skills and Qualifications

• Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.

• Proven experience (8+ years) in software architecture and design, with a focus on Python.

• Strong proficiency in web-based application development and cloud computing technologies.

• Expertise in modern architecture frameworks, microservices, and RESTful API design.

• Excellent communication and interpersonal skills, with the ability to convey technical concepts to non-technical stakeholders.

• Strong problem-solving skills and a proactive approach to addressing challenges.
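As a minimal sketch of the RESTful API design expertise listed above, the snippet below exposes a tiny JSON-over-HTTP endpoint using only the Python standard library. The `/health` route and response shape are invented for illustration; a production microservice would use a framework and proper deployment, not this toy server.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Resource-oriented routing: one GET route returning JSON.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for a clean demo.
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
print(payload)
server.shutdown()
```

The same request/response contract scales up naturally: each microservice owns a small set of such resources behind a stable URL.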


MIC Global

at MIC Global

Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
2yrs+
Upto ₹15L / yr (Varies)
Python
FastAPI
Flask
Django
SQL
+4 more

About the Role 

We're seeking a Python Backend Developer to join our insurtech analytics team. This role focuses on developing backend APIs, automating insurance reporting processes, and supporting data analysis tools. You'll work with insurance data, build REST APIs, and help streamline operational workflows through automation. 


Key Responsibilities 

  • Automate insurance reporting processes including bordereaux, reconciliations, and data extraction from various file formats 
  • Support and maintain interactive dashboards and reporting tools for business stakeholders 
  • Develop Python scripts and applications for data processing, validation, and transformation 
  • Develop and maintain backend APIs using FastAPI or Flask 
  • Perform data analysis and generate insights from insurance datasets 
  • Automate recurring analytical and reporting tasks 
  • Work with SQL databases to query, analyze, and extract data 
  • Collaborate with business users to understand requirements and deliver solutions 
  • Document code, processes, and create user guides for dashboards and tools 
  • Support data quality initiatives and implement validation checks 

Requirements 

Essential 

  • 2+ years of Python development experience 
  • Strong knowledge of Python libraries: Pandas, NumPy for data manipulation 
  • Experience building web applications or dashboards with Python frameworks 
  • Knowledge of FastAPI or Flask for building backend APIs and applications 
  • Proficiency in SQL and working with relational databases 
  • Experience with data visualization libraries (Matplotlib, Plotly, Seaborn) 
  • Ability to work with Excel, CSV, and other data file formats 
  • Strong problem-solving and analytical thinking skills 
  • Good communication skills to work with non-technical stakeholders 

Desirable 

  • Experience in insurance or financial services industry 
  • Familiarity with insurance reporting processes (bordereaux, reconciliations, claims data) 
  • Experience with Azure cloud services (Azure Functions, Blob Storage, SQL Database) 
  • Experience with version control systems (Git, GitHub, Azure DevOps) 
  • Experience with API development and RESTful services 

Tech Stack 

Python 3.x, FastAPI, Flask, Pandas, NumPy, Plotly, Matplotlib, SQL Server, MS Azure, Git, Azure DevOps, REST APIs, Excel/CSV processing libraries 
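As a sketch of the reporting automation this role involves, the snippet below reconciles a hypothetical premium bordereau against received payments with Pandas. All column names, policy references, and figures are invented for illustration, not an actual reporting format.

```python
import pandas as pd

# Premiums reported by a broker vs. amounts actually received.
bordereau = pd.DataFrame({
    "policy_ref": ["P001", "P002", "P003"],
    "premium_reported": [1000.0, 2500.0, 750.0],
})
payments = pd.DataFrame({
    "policy_ref": ["P001", "P002", "P004"],
    "premium_received": [1000.0, 2400.0, 300.0],
})

# Outer join on the policy reference so unmatched rows on either side survive;
# the indicator column records which side each row came from.
recon = bordereau.merge(payments, on="policy_ref", how="outer", indicator=True)
recon["difference"] = (
    recon["premium_reported"].fillna(0) - recon["premium_received"].fillna(0)
)

# Flag anything that does not net to zero for manual review.
exceptions = recon[recon["difference"] != 0]
print(exceptions[["policy_ref", "difference", "_merge"]])
```

The same pattern generalises to other recurring reconciliations: load both sides, join on the business key, and surface only the exceptions.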

IntegriMart

Agency job
Pune
5 - 8 yrs
₹12L - ₹15L / yr
Manual testing
Automation
Python
Playwright
Amazon Web Services (AWS)
+2 more

Hope you are doing great!

We have an Urgent opening for a Senior Automation QA professional to join a global life sciences data platform company. Immediate interview slots available.


🔹 Quick Role Overview

  • Role: Senior Automation QA
  • Location: Pune (Hybrid - 3 days work from office)
  • Employment Type: Full-Time
  • Experience Required: 5+ Years
  • Interview Process: 2–3 Rounds
  • Qualification: B.E / B.Tech
  • Notice Period: 0-30 Days


📌 Job Description

IntegriChain is the data and business process platform for life sciences manufacturers, delivering visibility into patient access, affordability, and adherence. The platform enables manufacturers to drive gross-to-net savings, ensure channel integrity, and improve patient outcomes.

We are expanding our Engineering team to strengthen our ability to process large volumes of healthcare and pharmaceutical data at enterprise scale.

The Senior Automation QA will be responsible for ensuring software quality by designing, developing, and maintaining automated test frameworks. This role involves close collaboration with engineering and product teams, ownership of test strategy, mentoring junior QA engineers, and driving best practices to improve product reliability and release efficiency.


🎯 Key Responsibilities

  • Hands-on QA across UI, API, and Database testing – both Automation & Manual
  • Analyze requirements, user stories, and technical documents to design detailed test cases and test data
  • Design, build, execute, and maintain automation scripts using BDD (Gherkin), Pytest, and Playwright
  • Own and maintain QA artifacts: Test Strategy, BRD, defect metrics, leakage reports, quality dashboards
  • Work with stakeholders to review and improve testing approaches using data-backed quality metrics
  • Ensure maximum feasible automation coverage in every sprint
  • Perform functional, integration, and regression testing in Agile & DevOps environments
  • Drive Shift-left testing, identifying defects early and ensuring faster closures
  • Contribute to enhancing automation frameworks with minimal guidance
  • Lead and mentor a QA team (up to 5 members)
  • Support continuous improvement initiatives and institutionalize QA best practices
  • Act as a problem-solver and strong team collaborator in a fast-paced environment
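The Given/When/Then structure behind the BDD tooling listed above (Gherkin, Pytest) can be sketched framework-free, which is what pytest-bdd formalises with feature files and step decorators. The login logic, user store, and step names below are invented placeholders, not IntegriChain's actual test suite.

```python
# Toy system under test: check credentials against an in-memory store.
USERS = {"alice": "s3cret"}

def authenticate(username, password):
    return USERS.get(username) == password

def given_a_registered_user():
    # Given: a registered user exists
    return {"username": "alice", "password": "s3cret"}

def when_the_user_logs_in(ctx):
    # When: the user submits their credentials
    ctx["logged_in"] = authenticate(ctx["username"], ctx["password"])
    return ctx

def then_access_is_granted(ctx):
    # Then: authentication succeeds
    assert ctx["logged_in"] is True

ctx = when_the_user_logs_in(given_a_registered_user())
then_access_is_granted(ctx)
print("scenario passed")
```

In Gherkin these three steps would live in a `.feature` file, with each function bound to its step via a decorator, so business stakeholders can read the scenario while the code stays testable.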


🧩 Desired Skills & Competencies

✅ Must-Have:

  • 5+ years of experience in test planning, test case design, test data preparation, automation & manual testing
  • 3+ years of strong UI & API automation experience using Playwright with Python
  • Solid experience in BDD frameworks (Gherkin, Pytest)
  • Strong database testing skills (Postgres / Snowflake / MySQL / RDS)
  • Hands-on experience with Git and Jenkins (DevOps exposure)
  • Working experience with JMeter
  • Experience in Agile methodologies (Scrum / Kanban)
  • Excellent problem-solving, analytical, communication, and stakeholder management skills

👍 Good to Have:

  • Experience testing AWS / Cloud-hosted applications
  • Exposure to ETL processes and BI reporting systems


AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹60L - ₹80L / yr
Apache Airflow
Apache Spark
MLOps
AWS CloudFormation
DevOps
+19 more

Review Criteria:

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including at recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred:

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria:

  • CV Attachment is mandatory
  • Please provide CTC Breakup (Fixed + Variable)?
  • Are you okay for F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities:

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
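As one concrete form of the data-drift monitoring mentioned above, here is a toy Population Stability Index (PSI) check in plain Python. The bucket edges, sample distributions, and the common 0.2 alert threshold are illustrative only; a production setup would feed such a metric into CloudWatch or Grafana alerting.

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between two samples over fixed buckets."""
    def frac(xs, lo, hi):
        n = sum(1 for x in xs if lo <= x < hi)
        return max(n / len(xs), 1e-6)  # floor avoids log(0) on empty buckets
    score = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.1 * i for i in range(100)]            # training-time feature values
live_ok = [0.1 * i + 0.01 for i in range(100)]      # near-identical live traffic
live_shifted = [0.1 * i + 4.0 for i in range(100)]  # clearly shifted traffic
edges = [0.0, 2.5, 5.0, 7.5, 15.0]

print(psi(baseline, live_ok, edges))       # near zero: stable
print(psi(baseline, live_shifted, edges))  # well above 0.2: raise a drift alert
```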

 

Ideal Candidate:

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
  • Compute/Orchestration: EKS, ECS, EC2, Lambda
  • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  • Workflow: MWAA/Airflow, Step Functions
  • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.


Palcode.ai

at Palcode.ai

Team Palcode
Posted by Team Palcode
Remote only
1 - 2 yrs
₹2L - ₹5L / yr
Python
React.js
Amazon Web Services (AWS)

Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.


Role: Full Stack Developer

Industry Type: Software Product

Department: Engineering - Software & QA

Employment Type: Full Time, Permanent

Role Category: Software Development

Education

UG: Any Graduate

Alpheva AI
Ramakant gupta
Posted by Ramakant gupta
Remote only
1 - 3 yrs
₹10L - ₹25L / yr
React Native
React.js
NextJs (Next.js)
Python
PostgreSQL

About the Role

We’re hiring a Full Stack Engineer who can own features end to end, from UI to APIs to data models.

This is not a “ticket executor” role. You’ll work directly with product, AI, and founders to shape how users interact with intelligent financial systems.

If you enjoy shipping real features, fixing real problems, and seeing users actually use what you built, this role is for you.


What You Will Do

  • Build and ship frontend features using React, Next.js, and React Native
  • Develop backend services and APIs using Python and/or Golang
  • Own end-to-end product flows like onboarding, dashboards, insights, and AI conversations
  • Integrate frontend with backend and AI services (LLMs, tools, data pipelines)
  • Design and maintain PostgreSQL schemas, queries, and migrations
  • Ensure performance, reliability, and clean architecture across the stack
  • Collaborate closely with product, AI, and design to ship fast and iterate
  • Debug production issues and continuously improve UX and system quality


What We’re Looking For

  • 2 to 3+ years of professional full stack engineering experience
  • Strong hands-on experience with React, Next.js, and React Native
  • Backend experience with Python and/or Golang in production
  • Solid understanding of PostgreSQL, APIs, and system design
  • Strong fundamentals in HTML, CSS, TypeScript, and modern frontend patterns
  • Ability to work independently and take ownership in a startup environment
  • Product-minded engineer who thinks in terms of user outcomes, not just code
  • B.Tech in Computer Science or related field


Nice to Have

  • Experience with fintech, dashboards, or data-heavy products
  • Exposure to AI-powered interfaces, chat systems, or real-time data
  • Familiarity with cloud platforms like AWS or GCP
  • Experience handling sensitive or regulated data


Why Join Alpheva AI

  • Build real product used by real users from day one
  • Work directly with founders and influence core product decisions
  • Learn how AI-native fintech products are built end to end
  • High ownership, fast execution, zero corporate nonsense
  • Competitive compensation with meaningful growth upside


Sim Gems Group

at Sim Gems Group

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 10 yrs
Upto ₹25L / yr (Varies)
Python
Odoo (OpenERP)
SQL
Kubernetes
Data Structures

Employment Type: Full-time, Permanent

Location: Near Bommasandra Metro Station, Bangalore (Work from Office – 5 days/week)

Notice Period: 15 days or less preferred


About the Company:

SimStar Asia Ltd is a joint vendor of the SimGems and StarGems Group — a Hong Kong–based multinational organization engaged in the global business of conflict-free, high-value diamonds.

SimStar maintains the highest standards of integrity. Any candidate found engaging in unfair practices at any stage of the interview process will be disqualified and blacklisted.


Experience Required

  • 4+ years of relevant professional experience.

Key Responsibilities

  • Hands-on backend development using Python (mandatory).
  • Write optimized and complex SQL queries; perform query tuning and performance optimization.
  • Work extensively with the Odoo framework, including development and deployment.
  • Manage deployments using Docker and/or Kubernetes.
  • Develop frontend components using OWL.js or any modern JavaScript framework.
  • Design scalable systems with a strong foundation in Data Structures, Algorithms, and System Design.
  • Handle API integrations and data exchange between systems.
  • Participate in technical discussions and architecture decisions.
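The SQL query tuning expected above can be illustrated with a small sketch, using SQLite as a stand-in for the production database: the same lookup is planned before and after adding an index, compared via `EXPLAIN QUERY PLAN`. The table, column names, and data are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
con.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", i * 1.5) for i in range(1000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer = 'cust7'"

# Last column of EXPLAIN QUERY PLAN output is the human-readable plan detail.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
con.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(plan_before)  # full table scan
print(plan_after)   # index lookup
```

The principle carries over directly to PostgreSQL (`EXPLAIN ANALYZE`): measure the plan, add or adjust an index, and confirm the scan became a search.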

Interview Expectations

  • Candidates must be comfortable writing live code during interviews.
  • SQL queries and optimization scenarios will be part of the technical assessment.

Must-Have Skills

  • Python backend development
  • Advanced SQL
  • Odoo Framework & Deployment
  • Docker / Kubernetes
  • JavaScript frontend (OWL.js preferred)
  • System Design fundamentals
  • API integration experience


Cere Labs
Devesh Rajadhyax
Posted by Devesh Rajadhyax
Mumbai
0 - 1 yrs
₹3L - ₹4L / yr
Python
JavaScript
React.js
MySQL
PyCharm

About us


Cere Labs is a Mumbai-based company working in the field of Artificial Intelligence. It is a product company that utilizes the latest technologies, such as Python, Redis, Neo4j, MVC, Docker, and Kubernetes, to build its AI platform. Cere Labs’ clients are primarily from the Banking and Finance domain in India and the US. The company offers a great environment for its employees to learn and grow in technology.


Software Developer


Job brief


Cere Labs is seeking to hire a skilled and passionate software developer to help with the development of our current projects and product. Your duties will primarily revolve around building software by writing code, as well as modifying software to fix errors and improve its performance. You will also be involved in writing test cases and testing.


To be successful in this role, you will need extensive knowledge of programming languages and frameworks such as Java, Python, JavaScript, and React.


Ultimately, the role of the Software Engineer is to build high-quality, innovative, and fully performing software that complies with coding standards and technical design.



Responsibilities


  • Develop flowcharts, layouts and documentation to identify requirements and solutions
  • Write well-designed, testable code
  • Develop software verification plans and quality assurance procedures
  • Document and maintain software functionality
  • Troubleshoot, debug and upgrade existing systems
  • Deploy programs and test the deployed code
  • Comply with project plans and industry standards


Requirements


  • BE (CS/IT) degree in Computer Science
  • Ability to understand given requirements and produce a design based on the given specification.
  • Ability to develop unit testing of code components or complete applications.
  • Must be a full-stack developer and understand concepts of software engineering.
  • Ability to develop software in Python, Java, and JavaScript
  • Excellent knowledge of relational databases, MySQL and ORM technologies (JPA2, Hibernate), in-memory data stores such as Redis
  • Experience developing web applications using at least one popular web framework (JSF, Spring MVC, React) is preferred
  • Experience with test-driven development
  • Proficiency in software engineering tools, including popular IDEs such as PyCharm, Visual Studio Code and Eclipse
  • Proven work experience as a Software Engineer or Software Developer will be an added advantage



Working conditions


Hours: 9:00 AM to 6:00 PM

Weekly off: Sunday, First and Third Saturdays

Mode: Work from office


Recruitment process


The selection process includes:

  1. Written test
  2. Technical interview
  3. Final interview


Compensation


CTC: Rs. 3-4 lacs pa, depending on performance in the selection process.



Snabbit
Shweta Vyas
Posted by Shweta Vyas
Bengaluru (Bangalore)
2 - 6 yrs
₹25L - ₹45L / yr
Python
Java
Go Programming (Golang)
Mobile App Development
SQL
+1 more

About Snabbit: Snabbit is India’s first Quick-Service App, delivering home services in just 15 minutes through a hyperlocal network of trained and verified professionals. Backed by Nexus Venture Partners (investors in Zepto, Unacademy, and Ultrahuman), Snabbit is redefining convenience in home services with quality and speed at its core. Founded by Aayush Agarwal, former Chief of Staff at Zepto, Snabbit is pioneering the Quick-Commerce revolution in services. In a short period, we’ve completed thousands of jobs with unmatched customer satisfaction and are scaling rapidly.

At Snabbit, we don’t just build products—we craft solutions that transform everyday lives. This is a playground for engineers who love solving complex problems, building systems from the ground up, and working in a fast-paced, ownership-driven environment. You’ll work alongside some of the brightest minds, pushing boundaries and creating meaningful impact at scale.

Responsibilities:

● Design, implement, and maintain backend services and APIs

● Develop and architect complex UI features for iOS and Android apps using Flutter

● Write high-quality, efficient, and maintainable code, adhering to industry best practices.

● Participate in design discussions to develop scalable solutions and implement them.

● Take ownership of feature delivery timelines and coordinate with cross-functional teams

● Troubleshoot and debug issues to ensure smooth system operations.

● Design, develop, and own end-to-end features for in-house software and tools

● Optimize application performance and implement best practices for mobile development

● Deploy and maintain services infrastructure on AWS.

Requirements:

● Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.

● Experience:

○ 3-5 years of hands-on experience as a full-stack developer.

○ Expertise in developing backend services and mobile applications.

○ Experience in leading small technical projects or features

○ Proven track record of delivering complex mobile applications to production

● Technical Skills:

○ Strong knowledge of data structures, algorithms, and design patterns.

○ Proficiency in Python and advanced proficiency in Flutter, with a deep understanding of widget lifecycle and state management

○ Proficiency in RESTful APIs and microservices architecture

○ Knowledge of mobile app deployment processes and app store guidelines

○ Familiarity with version control systems (Git) and agile development methodologies

○ Experience with AWS or other relevant cloud technologies

○ Experience with databases (SQL, NoSQL) and data modeling

● Soft Skills:

○ Strong problem-solving and debugging abilities, with the ability to handle complex technical challenges and drive best practices within the team

○ Leadership qualities, with the ability to mentor and guide junior developers

○ Strong stakeholder management and client communication skills

○ A passion for learning and staying updated with technology trends.

Voiceoc

at Voiceoc

Bisman Gill
Posted by Bisman Gill
Noida
5 - 7 yrs
Upto ₹30L / yr (Varies)
Machine Learning (ML)
Python
Large Language Models (LLM) tuning
SaaS
Team Management
+7 more

About Voiceoc

Voiceoc is a Delhi-based health tech startup which was started with a vision to help healthcare companies around the globe by leveraging Voice & Text AI. We started our operations in August 2020 and today, the leading healthcare companies of the US, India, the Middle East & Africa leverage Voiceoc as a channel to communicate with thousands of patients on a daily basis.


Website: https://www.voiceoc.com/


Responsibilities Include (but not limited to):

We’re looking for a hands-on Chief Technology Officer (CTO) to lead all technology initiatives for Voiceoc’s US business.


This role is ideal for someone who combines strong engineering leadership with deep AI product-building experience — someone who can code, lead, and innovate at the same time.


The CTO will manage the engineering team, guide AI development, interface with clients for technical requirements, and ensure scalable, reliable delivery of all Voiceoc platforms.

Technical Leadership

  • Own end-to-end architecture, development, and deployment of Voiceoc’s AI-driven Voice & Text platforms.
  • Work closely with the Founder to define the technology roadmap, ensuring alignment with business priorities and client needs.
  • Oversee AI/ML feature development — including LLM integrations, automation workflows, and backend systems.
  • Ensure system scalability, data security, uptime, and performance across all active deployments (US Projects).
  • Collaborate with the AI/ML engineers to guide RAG pipelines, voicebot logic, and LLM prompt optimization.
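The retrieval step of a RAG pipeline like the ones mentioned above can be sketched with bag-of-words cosine similarity. In production the vectors would come from an embedding model and the documents from a vector store; the documents, query, and prompt wording here are invented for illustration.

```python
import math
from collections import Counter

DOCS = [
    "clinic hours are 9am to 5pm on weekdays",
    "appointments can be rescheduled via the patient portal",
    "the billing department handles insurance claims",
]

def vectorize(text):
    # Word-count vector; a real pipeline would use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Ground the LLM: retrieved context goes into the prompt ahead of the question.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("can appointments be rescheduled"))
```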

Hands-On Contribution

  • Actively contribute to the core codebase (preferably Python/FastAPI/Node).
  • Lead by example in code reviews, architecture design, and debugging.
  • Experiment with LLM frameworks (OpenAI, Gemini, Mistral, etc.) and explore their applications in healthcare automation.

Product & Delivery Management

  • Translate client requirements into clear technical specifications and deliverables.
  • Oversee product versioning, release management, QA, and DevOps pipelines.
  • Collaborate with client success and operations teams to handle technical escalations, performance issues, and integration requests.
  • Drive AI feature innovation — identify opportunities for automation, personalization, and predictive insights.

Team Management

  • Manage and mentor an 8–10 member engineering team.
  • Conduct weekly sprint reviews, define coding standards, and ensure timely, high-quality delivery.
  • Hire and train new engineers to expand Voiceoc’s technical capability.
  • Foster a culture of accountability, speed, and innovation.

Client-Facing & Operational Ownership

  • Join client calls (US-based hospitals) to understand technical requirements or resolve issues directly.
  • Collaborate with the founder on technical presentations and proof-of-concept discussions.
  • Handle A–Z of tech operations for the US business — infrastructure, integrations, uptime, and client satisfaction.

Technical Requirements

Must-Have:

  • 5-7 years of experience in software engineering with at least 2+ years in a leadership capacity.
  • Strong proficiency in Python (FastAPI, Flask, or Django).
  • Experience integrating OpenAI / Gemini / Mistral / Whisper / LangChain.
  • Solid experience with AI/ML model integration, LLMs, and RAG pipelines.
  • Proven expertise in cloud deployment (AWS / GCP), Docker, and CI/CD.
  • Strong understanding of backend architecture, API integrations, and system design.
  • Experience building scalable, production-grade SaaS or conversational AI systems.
  • Excellent communication and leadership skills — capable of interfacing with both engineers and clients.

Good to Have (Optional):

  • Familiarity with telephony & voice tech stacks (Twilio, Exotel, Asterisk etc.).

What We Offer

  • Opportunity to lead the entire technology vertical for a growing global healthtech startup.
  • Direct collaboration with the Founder/CEO on strategy and innovation.
  • Competitive compensation — salary + meaningful equity stake.
  • Dynamic and fast-paced work culture with tangible impact on global healthcare.

Other Details

  • Work Mode: Hybrid - Noida (Office) + Home
  • Work Timing: US Hours
Unilog

at Unilog

Bisman Gill
Posted by Bisman Gill
Remote, BLR, Mysore
8yrs+
Upto ₹52L / yr (Varies)
Machine Learning (ML)
Artificial Intelligence (AI)
Google Vertex AI
Agentic AI
PyTorch
+7 more

About Unilog

Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.

With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail. 

Unilog’ s Mission Statement

At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.


Designation:- AI Architect

Location: Bangalore/Mysore/Remote  

Job Type: Full-time  

Department: Software R&D  


About the Role  

We are looking for a highly motivated AI Architect to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vector Databases, AI Search, Agentic AI, Automation, and more.  

As an Architect, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.


Key Responsibilities  

Research & Experimentation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, Vector Search, AI Agents, and Automation. 


Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.


AI/ML Engineering: Design and develop AI/ML models, LLMs, embeddings, and intelligent search capabilities leveraging state-of-the-art techniques. 


Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.  


Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.  


Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.


Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.  


Required Qualifications  


  1. 8-14 years of experience in AI/ML, software engineering, or a related field.  
  2. Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini.
  3. Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), and agentic AI.  
  4. Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.  
  5. Strong problem-solving skills and a passion for innovation.  
  6. Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.  


Preferred Qualifications  

  • Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.  
  • Knowledge of data pipelines, MLOps, and AI governance.  
  • Contributions to open-source AI/ML projects or published research papers.  


Why Join Us?  

  • Work on cutting-edge AI/ML innovations with the CTO Office.  
  • Influence the company’s future AI strategy and shape emerging technologies.  
  • Competitive compensation, growth opportunities, and a culture of continuous learning.    


About our Benefits:

Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, 401K match, career development, advancement opportunities, annual merit, pay-for-performance bonus eligibility, a generous time-off policy, and a flexible work environment.


Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class. 

Marble X
Manpreet Kaur
Posted by Manpreet Kaur
Mumbai
3 - 10 yrs
₹3L - ₹22L / yr
Shell Scripting
Python
MLOps
Jenkins
Git
+4 more


Skills  - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance


Responsibilities:

1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing, and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability
5. Implement best practices for version control, model reproducibility, and governance
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies, and trends
10. Provide support and guidance to other team members on MLOps practices
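As an illustration of responsibilities 3 through 5 above, a promotion gate is one common building block of such a pipeline: before a newly trained model replaces the production one, its evaluation metric is compared against the current model's, and the winner is recorded in a versioned registry. The sketch below is a minimal, hypothetical illustration; the metric names, registry layout, and thresholds are assumptions for the example, not part of the role description.

```python
import json
import tempfile
from pathlib import Path


def should_promote(candidate: dict, production: dict, metric: str = "accuracy",
                   min_improvement: float = 0.0) -> bool:
    """Gate a deployment: promote only if the candidate model's metric
    meets or beats production by at least min_improvement."""
    return candidate[metric] >= production[metric] + min_improvement


def register_model(registry_dir: Path, name: str, version: int, metrics: dict) -> Path:
    """Record a versioned model entry as JSON so every deployment decision
    is reproducible and auditable."""
    entry = {"name": name, "version": version, "metrics": metrics}
    path = registry_dir / f"{name}-v{version}.json"
    path.write_text(json.dumps(entry, indent=2))
    return path


if __name__ == "__main__":
    registry = Path(tempfile.mkdtemp())
    prod = {"accuracy": 0.91}
    cand = {"accuracy": 0.93}
    if should_promote(cand, prod):
        record = register_model(registry, "churn-model", 2, cand)
        print(f"promoted: {record.name}")
```

A real pipeline would wire a gate like this into a Jenkins stage so that failed candidates never reach production, with the registry backed by a proper model store rather than local JSON files.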


Required skills and experience:

• 3-10 years of experience in MLOps, DevOps, or a related field
• Bachelor's degree in Computer Science, Data Science, or a related field
• Strong understanding of machine learning principles and model lifecycle management
• Experience in Jenkins pipeline development
• Experience in automation scripting



Shipthis Inc
Posted by Shariba Tasneem
Bengaluru (Bangalore)
0 - 1 yrs
₹8L - ₹9L / yr
Python, FastAPI

At Shipthis, we work to build a better future and make meaningful changes in the freight forwarding industry. Our team members aren't just employees; we are a team of bright, skilled professionals with a single straightforward goal: to evolve freight forwarders toward digitalized operations, enhancing efficiency, and driving lasting change.

As a company, we're just the right size for every person to take initiative and make things happen. Join us in reshaping the future of logistics and be part of a journey where your contributions make a tangible difference.


Learn more at www.shipthis.co


Job Description

Who are we looking for?


We are seeking a skilled developer with experience in Python and end-to-end (E2E) project implementation to join our team.


What will you be doing?


  • Design and develop backend services for the ERP system using Python and MongoDB
  • Collaborate with the frontend development team to integrate the frontend and backend functionalities
  • Develop and maintain APIs that are efficient, scalable, and secure
  • Write efficient and reusable code that can be easily maintained and updated
  • Optimize backend services to improve performance and scalability
  • Troubleshoot and resolve backend issues and bugs


Desired qualifications include


  • Bachelor’s degree in computer science or a related field
  • Proven experience in Python FastAPI with end-to-end (E2E) project implementation
  • Proficiency with DevOps and CI/CD pipelines (GitHub Actions, Google Cloud Platform)
  • Knowledge of microservices architecture
  • Experience in MongoDB development, including aggregation pipelines
  • Proficiency in RESTful API development
  • Experience with the Git version control system
  • Strong problem-solving and analytical skills
  • Ability to work in a fast-paced environment
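To make the MongoDB aggregation requirement above concrete, here is a small sketch: the `$group` stage below is the kind of pipeline a freight application might pass to pymongo's `collection.aggregate()`, and the function beneath it computes the same result in memory so the example runs without a database. The field names (`status`, `weight`) are illustrative assumptions, not from the posting.

```python
from collections import defaultdict

# A $group stage as it would appear in an aggregation pipeline:
# group shipment documents by status and sum their weights.
group_stage = {"$group": {"_id": "$status", "total_weight": {"$sum": "$weight"}}}


def group_by_status(shipments):
    """In-memory equivalent of the $group stage above:
    sum shipment weights per status value."""
    totals = defaultdict(float)
    for doc in shipments:
        totals[doc["status"]] += doc["weight"]
    return dict(totals)


if __name__ == "__main__":
    docs = [
        {"status": "in_transit", "weight": 120.0},
        {"status": "delivered", "weight": 80.0},
        {"status": "in_transit", "weight": 30.0},
    ]
    print(group_by_status(docs))  # {'in_transit': 150.0, 'delivered': 80.0}
```

Against a real deployment, the same stage would run server-side via `collection.aggregate([group_stage])`, letting MongoDB do the grouping instead of the application.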


We welcome candidates


  • Who can join immediately
  • Who are returning to work after a career break; female candidates are strongly encouraged to apply
  • Whether you're seasoned or just starting out, if you have the skills and passion, we invite you to apply


We are an equal-opportunity employer and are committed to fostering diversity and inclusivity. We do not discriminate based on race, religion, color, gender, sexual orientation, age, marital status, or disability status.


JOB SYNOPSIS


  • Location: Bangalore
  • Job Type: Full-time
  • Role: Software Developer
  • Industry Type: Software Product
  • Functional Area: Software Development
  • Employment Type: Full-Time, Permanent