
50+ Python Jobs in Pune | Python Job openings in Pune

Apply to 50+ Python Jobs in Pune on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Wissen Technology

at Wissen Technology

4 recruiters
Posted by Janane Mohanasankaran
Pune, Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
Azure
Terraform
.NET
Python
+2 more

Job Description:


Position - Cloud Developer

Experience - 5 - 8 years

Location - Mumbai & Pune


Responsibilities:

  • Design, develop, and maintain robust software applications using widely adopted languages suited to the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices (see the sketch after this list).
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
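A minimal sketch of the kind of RESTful backend service this role describes, using FastAPI as one possible Python framework (the framework choice and all names here are assumptions for illustration, not part of the posting):

# Hypothetical FastAPI service; all names are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    quantity: int

# In-memory store standing in for a real database.
items: dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    items[item_id] = item
    return {"id": item_id, "item": item}

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]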


Skills:

  • 5+ years of experience
  • Experience with IaC (Infrastructure as Code) modules
  • Terraform coding experience, including authoring Terraform modules as part of a central platform team
  • Azure/GCP cloud experience is a must
  • C#/Python/Java coding experience is good to have


Wissen Technology
Pune, Mumbai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
Python
Kubernetes
Shell Scripting
SRE Engineer
+1 more

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs, and who bring rich work experience from some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in five years without any external funding or investments.
  • Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. These include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.



Job Description: 

Please find the details below:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate will be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.

 

Key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production/Non-Production).

• Must spend a minimum of one week per month on call, helping with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in a Production Support/Application Management/Application Development (support/maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have




One of the reputed Clients in India

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Hyderabad, Pune
6 - 8 yrs
₹12L - ₹13L / yr
Amazon Web Services (AWS)
Python
PySpark

Our client is looking to hire a Databricks Admin immediately.


This is PAN-India bulk hiring.


Minimum 6-8+ years of experience with Databricks, PySpark/Python, and AWS.

AWS experience is a must.


A notice period of 15-30 days is preferred.


Share profiles at hr at etpspl dot com

Please refer/share our email with your friends/colleagues who are looking for a job.

CoffeeBeans

at CoffeeBeans

2 candid answers
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
4 - 6 yrs
Up to ₹23L / yr (varies)
Artificial Intelligence (AI)
Machine Learning (ML)
Python
Large Language Models (LLM)
Natural Language Processing (NLP)
+1 more

As an L1/L2 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)– based applications, such as chatbots, copilots, and automation workflows.


Key Responsibilities

  • Collaborate with business stakeholders to translate problem statements into data science tasks.
  • Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
  • Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
  • Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation (see the sketch after this list).
  • Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
  • Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
  • Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
  • Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
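For the RAG workflow mentioned above, a toy sketch of the retrieval step; TF-IDF stands in for a real vector store, and the final prompt would be sent to whichever LLM client the team uses (all data and names here are hypothetical):

# Toy retrieval-augmented generation (RAG) sketch; TF-IDF stands in for a vector store.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise customers.",
    "Passwords must be reset every 90 days.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query and keep the top k.
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# build_prompt("How long do refunds take?") is then passed to the chosen LLM.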

Required Skills & Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
  • 2.5–5 years of experience in data science, ML, or AI (projects and internships included).
  • Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
  • Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or strong interest with the ability to learn quickly.
  • Familiarity with SQL and structured data handling.
  • Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
  • Strong communication, problem-solving skills, and a proactive attitude.

Nice-to-Have (Not Mandatory)

  • Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
  • Knowledge of REST APIs and model integration into backend systems.
  • Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
Wissen Technology

at Wissen Technology

4 recruiters
Posted by Sonali RajeshKumar
Bengaluru (Bangalore), Pune, Mumbai
4 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Reliability engineering
Python
Shell Scripting

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs, and who bring rich work experience from some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in five years without any external funding or investments.
  • Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. These include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.


Job Description: 

Please find the details below:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate will be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.

 

Key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production/Non-Production).

• Must spend a minimum of one week per month on call, helping with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in a Production Support/Application Management/Application Development (support/maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have


Nirmitee.io

at Nirmitee.io

4 recruiters
Posted by Gitashri K
Pune
6 - 10 yrs
₹5L - ₹15L / yr
MERN Stack
Python
CI/CD
Amazon Web Services (AWS)

About Nirmitee.io

Nirmitee.io is a fast-growing product engineering and IT services company building world-class products across healthcare and fintech. We believe in engineering excellence, innovation, and long-term impact. As a Tech Lead, you’ll be at the core of this journey — driving execution, building scalable systems, and mentoring a strong team.


What You’ll Do

  • Lead by Example:

Be a hands-on contributor — writing clean, scalable, and production-grade code.

Review pull requests, set coding standards, and push for technical excellence.

  • Own Delivery:

Take end-to-end ownership of sprints, architecture, code quality, and deployment.

Collaborate with PMs and founders to scope, plan, and execute product features.

  • Build & Scale Teams:

Mentor and coach engineers to grow technically and professionally.

Foster a culture of accountability, transparency, and continuous improvement.

  • Drive Process & Best Practices:

Implement strong CI/CD pipelines, testing strategies, and release processes.

Ensure predictable and high-quality delivery across multiple projects.

  • Architect & Innovate:

Make key technical decisions, evaluate new technologies, and design system architecture for scale and performance.

Help shape the engineering roadmap with future-proof solutions.


What We’re Looking For

  • 10+ years of experience in software development with at least 2 years in a lead/mentorship role.
  • Strong hands-on expertise in MERN Stack or Python/Django/Flask, or GoLang/Java/Rust.
  • Experience with AWS / Azure / GCP, containerization (Docker/Kubernetes), and CI/CD pipelines.
  • Deep understanding of system design, architecture patterns, and performance optimization.
  • Proven track record of shipping products in fast-paced environments.
  • Strong communication, leadership, and ownership mindset.
  • Believes in delivering what is committed and has a show-up attitude


Nice to Have

  • Exposure to healthcare or fintech domains.
  • Experience in building scalable SaaS platforms or open-source contributions.
  • Knowledge of security, compliance, and observability best practices.


Why Nirmitee.io

  • Work directly with the founding team and shape product direction.
  • Flat hierarchy — your ideas will matter.
  • Ownership, speed, and impact.
  • Opportunity to build something historic.
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai
7 - 10 yrs
₹10L - ₹18L / yr
Full Stack
React.js
Python
Go Programming (Golang)
CI/CD
+9 more

Full-Stack Developer

Exp: 5+ years required

Night shift: 8 PM–5 AM / 9 PM–6 AM

Only immediate joiners can apply


We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.

Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.

Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.


Hunarstreet Technologies Pvt Ltd

Agency job
Chennai, Hyderabad, Bengaluru (Bangalore), Mumbai, Pune, Gurugram, Mohali, Panchkula
5 - 15 yrs
₹10L - ₹15L / yr
Fullstack Developer
Web Development
JavaScript
TypeScript
Go Programming (Golang)
+5 more

We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.


Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.


Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.

VyTCDC
Posted by Gobinath Sundaram
Bengaluru (Bangalore), Pune, Hyderabad
6 - 12 yrs
₹5L - ₹28L / yr
Data Science
Python
Large Language Models (LLM)

Job Description:

Role: Data Scientist

Responsibilities:

  • Lead data science and machine learning projects, contributing to model development, optimization, and evaluation.
  • Perform data cleaning, feature engineering, and exploratory data analysis.
  • Translate business requirements into technical solutions; document and communicate project progress; manage non-technical stakeholders.
  • Collaborate with other data scientists and engineers to deliver projects.

Technical Skills – Must have:

  • Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
  • Proficiency with Python for data analysis and supervised & unsupervised learning ML tasks.
  • Ability to translate complex machine learning problem statements into specific deliverables and requirements.
  • Should have worked with major cloud platforms such as AWS, Azure, or GCP.
  • Working knowledge of SQL and NoSQL databases.
  • Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
  • Keeps abreast of new tools, algorithms, and techniques in machine learning and works to implement them in the organization.
  • Strong understanding of evaluation and monitoring metrics for machine learning projects.

Pune, Bengaluru (Bangalore), Hyderabad
8 - 12 yrs
₹14L - ₹15L / yr
R Programming
Python
Scikit-Learn
TensorFlow
PyTorch
+8 more

Role: Data Scientist (Python + R Expertise)

Exp: 8–12 Years

CTC: up to 30 LPA


Required Skills & Qualifications:

  • 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
  • Strong expertise in Python and R for data analysis, modeling, and visualization.
  • Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
  • Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
  • Experience with SQL and working with large-scale structured and unstructured data.
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
  • Excellent analytical, problem-solving, and communication skills.


Preferred Skills:

  • Experience with NLP, time series forecasting, or deep learning projects.
  • Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
  • Experience working in product or data-driven organizations.
  • Knowledge of MLOps and model lifecycle management is a plus.


If interested, kindly share your updated resume on 82008 31681


Deqode

at Deqode

1 recruiter
Posted by Apoorva Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Nagpur, Ahmedabad, Jaipur, Kochi (Cochin)
3.6 - 8 yrs
₹4L - ₹18L / yr
Python
Django
Flask
Amazon Web Services (AWS)
AWS Lambda
+3 more

Job Summary:

Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.


Key Responsibilities:

  • Design, develop, and deploy backend services and APIs using Python.
  • Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); see the sketch after this list.
  • Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
  • Implement containerized environments using Docker and manage orchestration via Kubernetes.
  • Write automation and scripting solutions in Bash/Shell to streamline operations.
  • Work with relational databases like MySQL and SQL, including query optimization.
  • Collaborate directly with clients to understand requirements and provide technical solutions.
  • Ensure system reliability, performance, and scalability across environments.
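As one concrete illustration of the AWS work described above, a minimal boto3 sketch that uploads a build artifact to S3 and invokes a Lambda function (bucket, function, and file names are hypothetical; AWS credentials are assumed to be configured):

# Hypothetical names throughout; assumes AWS credentials are configured.
import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Upload a build artifact to S3.
s3.upload_file("dist/app.zip", "example-artifacts-bucket", "releases/app.zip")

# Invoke a Lambda function synchronously and read its response.
response = lambda_client.invoke(
    FunctionName="example-deploy-hook",
    InvocationType="RequestResponse",
    Payload=json.dumps({"release": "app.zip"}),
)
print(json.loads(response["Payload"].read()))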


Required Skills:

  • 3.5+ years of hands-on experience in Python development.
  • Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
  • Good understanding of Terraform or other Infrastructure as Code tools.
  • Proficient with Docker and container orchestration using Kubernetes.
  • Experience with CI/CD tools like Jenkins or GitHub Actions.
  • Strong command of SQL/MySQL and scripting with Bash/Shell.
  • Experience working with external clients or in client-facing roles.


Preferred Qualifications:

  • AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
  • Familiarity with Agile/Scrum methodologies.
  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder management abilities.


CoffeeBeans

at CoffeeBeans

2 candid answers
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune, Hyderabad
5 - 8 yrs
Up to ₹28L / yr (varies)
Apache Spark
Scala
Python

Focus Areas:

  • Build applications and solutions that process and analyze large-scale data.
  • Develop data-driven applications and analytical tools.
  • Implement business logic, algorithms, and backend services.
  • Design and build APIs for secure and efficient data exchange.

Key Responsibilities:

  • Develop and maintain data processing applications using Apache Spark and Hadoop.
  • Write MapReduce jobs and complex data transformation logic.
  • Implement machine learning models and analytics solutions for business use cases.
  • Optimize code for performance and scalability; perform debugging and troubleshooting.
  • Work hands-on with Databricks for data engineering and analysis.
  • Design and manage Airflow DAGs for orchestration and automation (see the sketch after this list).
  • Integrate and maintain CI/CD pipelines (preferably using Jenkins).
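A minimal sketch of the kind of Airflow DAG this role would design (task bodies, names, and the schedule are placeholders, not a prescribed implementation):

# Minimal Airflow DAG sketch; task logic and names are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("trigger the Spark/Databricks transformation job")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run extract before transform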

Primary Skills & Qualifications:

  • Strong programming skills in Scala and Python.
  • Expertise in Apache Spark for large-scale data processing.
  • Solid understanding of data structures and algorithms.
  • Proven experience in application development and software engineering best practices.
  • Experience working in agile and collaborative environments.


Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
Kubernetes
Helm
Docker
Amazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash (see the sketch after this list)
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible
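In the spirit of the Python/Pytest scripting above, a small parametrized health-check test (service URLs are hypothetical and the health endpoint's response shape is an assumption):

# Hypothetical endpoint health checks; URLs and response shape are assumptions.
import pytest
import requests

SERVICES = [
    "http://payments.example.internal/healthz",
    "http://orders.example.internal/healthz",
]

@pytest.mark.parametrize("url", SERVICES)
def test_service_is_healthy(url):
    response = requests.get(url, timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"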

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Tech Prescient

at Tech Prescient

3 candid answers
3 recruiters
Posted by Ashwini Damle
Pune
6 - 8 yrs
₹15L - ₹25L / yr
Python
Django
Flask
MySQL
PostgreSQL
+5 more

Job Position: Senior Technical Lead / Architect

Desired Skills: Python, Django, Flask, MySQL, PostgreSQL, Amazon Web Services, JavaScript, Identity Security, IGA, OAuth

Experience Range: 6 – 8 Years

Type: Full Time

Location: Pune, India


Job Description:


Tech Prescient is looking for an experienced and proven Technical Lead (Python/Django/Flask/FastAPI, React, and AWS/Azure Cloud) who has worked across the modern full stack to deliver scalable, secure software products and solutions. The ideal candidate should have experience leading from the front — handling customer interactions, mentoring teams, owning technical delivery, and ensuring the highest quality standards.


Key Responsibilities:

  • Lead the end-to-end design and development of applications using the Python stack (Django, Flask, FastAPI).
  • Architect and implement secure, scalable, and cloud-native solutions on AWS or Azure.
  • Drive technical discussions, architecture reviews, and ensure adherence to design and code quality standards.
  • Work closely with customers to translate business requirements into robust technical solutions.
  • Oversee development teams, manage delivery timelines, and guide sprint execution.
  • Design and implement microservices-based architectures and serverless deployments.
  • Build and integrate RESTful APIs and backend services; experience with Django Rest Framework (DRF) is a plus.
  • Responsible for infrastructure planning, deployment, and automation on AWS (ECS, Lambda, EC2, S3, RDS, CloudFormation, etc.).
  • Collaborate with cross-functional teams to ensure seamless delivery and continuous improvement.
  • Champion best practices in software security, CI/CD, and DevOps.
  • Provide technical mentorship to developers and lead project communications with clients and internal stakeholders.


Identity & Security Expertise:

  • Strong understanding of Identity and Access Management (IAM) principles and best practices.
  • Experience in implementing Identity Governance and Administration (IGA) solutions for user lifecycle management, access provisioning, and compliance.
  • Hands-on experience with OAuth 2.0, OpenID Connect, SAML, and related identity protocols for securing APIs and services (see the sketch after this list).
  • Experience integrating authentication and authorization mechanisms within web and cloud applications.
  • Familiarity with Single Sign-On (SSO), MFA, and role-based access control (RBAC).
  • Exposure to AWS IAM, Cognito, or other cloud-based identity providers.
  • Ability to assess and enhance application security posture, ensuring compliance with enterprise identity and security standards.
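As a small illustration of the OAuth 2.0/OIDC work above, a hedged PyJWT sketch that validates a bearer token against an identity provider's JWKS endpoint (the IdP URL, audience, and issuer are assumptions for illustration):

# Sketch only: the IdP URL, audience, and issuer are assumptions.
import jwt  # PyJWT
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"  # hypothetical IdP

def validate_token(token: str) -> dict:
    # Fetch the signing key matching the token's key id, then verify claims.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="example-api",            # assumed audience
        issuer="https://idp.example.com",  # assumed issuer
    )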


Skills and Experience:

  • 6 – 8 years of hands-on experience in software design, development, and delivery.
  • Strong foundation in Python and related frameworks (Django, Flask, FastAPI).
  • Experience designing secure, scalable microservices and API architectures.
  • Good understanding of relational databases (MySQL, PostgreSQL).
  • Proven leadership, communication, and customer engagement skills.
  • Knowledge of Kubernetes is an added advantage.
  • Excellent problem-solving skills and passion for learning new technologies.
CoffeeBeans

at CoffeeBeans

2 candid answers
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
7 - 9 yrs
Up to ₹32L / yr (varies)
Python
ETL
Data modeling
CI/CD
Databricks
+2 more

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.

In this role, you’ll:

  • Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
  • Mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

  • Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
  • Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Collaborate with stakeholders to translate business requirements into technical solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, production-ready code with proper documentation and testing.
  • Contribute to architectural discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.

Must-Have Skills

  • Strong programming skills in Python and advanced SQL expertise.
  • Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar); see the sketch after this list.
  • Experience with orchestration tools like Airflow (or similar).
  • Familiarity with CI/CD pipelines and Git.
  • Ability to debug, optimize, and scale data pipelines in production.
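A compact PySpark sketch of the batch side of such a pipeline (paths, columns, and the aggregation are illustrative, not a prescribed design):

# Illustrative batch pipeline; paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

orders = spark.read.parquet("s3://example-lake/raw/orders/")

# Aggregate completed orders into a daily revenue table.
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETED")
          .groupBy("order_date")
          .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/curated/daily_revenue/"
)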

Good to Have

  • Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms.
  • Understanding of data governance, quality frameworks, and observability.
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).

Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements, as needed.
Virtana

at Virtana

2 candid answers
Posted by Eman Khan
Pune
8 - 13 yrs
₹35L - ₹60L / yr
Java
Spring
Go Programming (Golang)
Python
Amazon Web Services (AWS)
+21 more

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers and the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Wissen Technology

at Wissen Technology

4 recruiters
Posted by Gagandeep Kaur
Bengaluru (Bangalore), Mumbai, Pune
4 - 7 yrs
Best in industry
Python
PySpark
pandas
Airflow
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.

Experience: 4-7 years

Notice Period: Immediate- 15 days

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python and Pandas (see the sketch after this list).
  • Implement and manage workflows using Airflow.
  • Utilize Azure Cloud Services for data storage and processing.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Optimize and scale data infrastructure to meet business needs.
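A small pandas sketch of the kind of cleaning and aggregation step such a pipeline might contain (file paths and column names are illustrative only):

# Illustrative pandas transform; paths and columns are placeholders.
import pandas as pd

def transform(raw_path: str, out_path: str) -> None:
    df = pd.read_csv(raw_path, parse_dates=["event_time"])
    df = df.dropna(subset=["user_id"])            # drop rows missing the key
    df["event_date"] = df["event_time"].dt.date   # derive a partition-style column
    daily = (
        df.groupby(["event_date", "event_type"])
          .size()
          .reset_index(name="events")
    )
    daily.to_parquet(out_path, index=False)       # needs a parquet engine like pyarrow

# transform("raw/events.csv", "curated/daily_events.parquet")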

Qualifications and Required Skills:

  • Proficiency in Python (Must Have).
  • Strong experience with Pandas (Must Have).
  • Expertise in Airflow (Must Have).
  • Experience with Azure Cloud Services.
  • Good communication skills.

Good to Have Skills:

  • Experience with Pyspark.
  • Knowledge of Kubernetes.

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Hyderabad, Noida, Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Kochi (Cochin), Kolkata (Calcutta), Pune, Thiruvananthapuram (Trivandrum)
7 - 15 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Python
Data Lake

SENIOR DATA ENGINEER:

ROLE SUMMARY:

Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.



RESPONSIBILITIES:

  • Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
  • Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
  • Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
  • Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
  • DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
  • Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
  • Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
  • Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
  • Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
  • Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.



REQUIRED QUALIFICATIONS:

  • Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
  •  Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
  • Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
  • ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
  • Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
  • DevOps/IaC: Terraform with 15+ years of practice; CI/CD (GitHub Actions, Jenkins) with 10+ years; config governance and release management.
  • Serverless and events: Design event-driven distributed systems on AWS.
  • NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
  • AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification (see the sketch after this list).
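For the PII classification work mentioned above, a minimal boto3 sketch using Amazon Comprehend's PII detection (the region and input text are illustrative):

# Minimal PII detection sketch with Amazon Comprehend; inputs are illustrative.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "Contact Jane Doe at jane.doe@example.com or +1-202-555-0143."
result = comprehend.detect_pii_entities(Text=text, LanguageCode="en")

# Print each detected entity type with the matching text span and confidence.
for entity in result["Entities"]:
    span = text[entity["BeginOffset"]:entity["EndOffset"]]
    print(entity["Type"], span, round(entity["Score"], 3))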



NICE-TO-HAVE QUALIFICATIONS:

  • Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
  • Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
  • Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
  • Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.



OUTCOMES AND MEASURES:

  • Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
  • Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
  • Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
  • Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
  • Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.



LOCATION AND SCHEDULE:

  • Location: Outside US (OUS).
  • Schedule: Minimum 6 hours of overlap with US time zones.

Wissen Technology

at Wissen Technology

4 recruiters
Posted by Bipasha Rath
Mumbai, Bengaluru (Bangalore), Pune
3 - 7 yrs
Best in industry
Python
pandas
PySpark

Experience: 3–7 Years

Locations: Pune / Bangalore / Mumbai

Notice Period: Immediate joiners only


Employment Type: Full-time

🛠️ Key Skills (Mandatory):

  • Python: Strong coding skills for data manipulation and automation.
  • PySpark: Experience with distributed data processing using Spark.
  • SQL: Proficient in writing complex queries for data extraction and transformation.
  • Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow


Interested candidates, please share your resume with the details below.


Total Experience -

Relevant Experience in Python, PySpark, SQL, Azure Databricks -

Current CTC -

Expected CTC -

Notice period -

Current Location -

Desired Location -


Wissen Technology

at Wissen Technology

4 recruiters
Posted by Bipasha Rath
Pune, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹40L / yr
Python
pandas
Data engineering

Experience - 7+ years


Must-Have:

  • Python (Pandas, PySpark)
  • Data engineering & workflow optimization
  • Delta Tables, Parquet

Good-to-Have:

  • Databricks
  • Apache Spark, DBT, Airflow
  • Advanced Pandas optimizations
  • PyTest/DBT testing frameworks


Interested candidates can revert with the details below.


Total Experience -

Relevant Experience in Python, Pandas, Data Engineering, Workflow Optimization, Delta Tables -

Current CTC -

Expected CTC -

Notice Period / LWD -

Current location -

Desired location -



Wissen Technology

at Wissen Technology

4 recruiters
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Pune, Mumbai
7 - 12 yrs
Best in industry
Python
pandas
PySpark
SQL
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia.

Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more.

Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables and Parquet and be proficient in Pandas and PySpark.

Experience: 7+ years

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python (Pandas, PySpark).
  • Optimize data workflows and ensure efficient data processing.
  • Work with Delta Tables and Parquet for data storage and management (see the sketch after this list).
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Implement best practices for data engineering and workflow optimization.
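A brief sketch of writing a Delta table from PySpark, assuming a Delta-enabled Spark environment such as Databricks (paths and the partition column are placeholders):

# Assumes a Delta-enabled Spark session (e.g., Databricks); names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_example").getOrCreate()

df = spark.read.parquet("/mnt/raw/trades/")

# Write as a Delta table, partitioned for downstream query pruning.
(df.write
   .format("delta")
   .mode("overwrite")
   .partitionBy("trade_date")
   .save("/mnt/curated/trades_delta/"))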

Qualifications and Required Skills:

  • Proficiency in Python, specifically with Pandas and PySpark.
  • Strong experience in data engineering and workflow optimization.
  • Knowledge of Delta Tables and Parquet.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a team environment.
  • Strong communication skills.

Good to Have Skills:

  • Experience with Databricks.
  • Knowledge of Apache Spark, DBT, and Airflow.
  • Advanced Pandas optimizations.
  • Familiarity with PyTest/DBT testing frameworks.

Wissen Sites:

 

Wissen | Driving Digital Transformation

A technology consultancy that drives digital innovation by connecting strategy and execution, helping global clients to strengthen their core technology.

 

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
2 - 5 yrs
Best in industry
Python
Django
Flask
Data Structures
Algorithms
+4 more

We're seeking an AI/ML Engineer to join our team.

As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate applied AI/ML solutions into the products being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques and implementing scalable solutions to drive innovation and improve the overall user experience.


Responsibilities

  • Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI
  • AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
  • Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
  • Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics (see the sketch after this list)
  • Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
  • Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
  • Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
  • Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
  • Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
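For the model training and evaluation bullet above, a compact scikit-learn sketch showing a train/validate split and the kind of metrics named there (the data is synthetic):

# Synthetic-data sketch of model training and evaluation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)

print("accuracy: ", accuracy_score(y_test, predictions))
print("precision:", precision_score(y_test, predictions))
print("recall:   ", recall_score(y_test, predictions))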

Requirements

  • Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
  • Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
  • Proficiency in programming languages commonly used for AI/ML. Preferably Python
  • Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
  • Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
  • Strong understanding of machine learning algorithms, statistics, and data structures
  • Experience with data preprocessing, data wrangling, and feature engineering
  • Knowledge of deep learning architectures, neural networks, and transfer learning
  • Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
  • Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
  • Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
  • Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
Tata Consultancy Services
Bengaluru (Bangalore), Hyderabad, Pune, Delhi, Kolkata, Chennai
5 - 8 yrs
₹7L - ₹30L / yr
Scala
Python
PySpark
Apache Hive
Spark
+3 more

Skills and competencies:

Required:

  • Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
  • Working experience in PySpark and Scala to develop code that validates and implements models in Credit Risk/Banking (see the sketch below).
  • Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
  • Familiarity with machine learning frameworks and libraries (such as scikit-learn, SparkML, TensorFlow, PyTorch, etc.).
  • Experience in systems integration, web services, and batch processing.
  • Experience in migrating code to PySpark/Scala is a big plus.
  • Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
  • Flexibility in approach and thought process.
  • Willingness to learn and keep up with periodic changes in regulatory requirements (e.g., FED).
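
For flavour, a hedged sketch of the kind of PySpark model-validation check this role involves; the input path, column names, and the rank-ordering test are all illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("model-validation").getOrCreate()

# Hypothetical scored portfolio: one row per account with a model score and a default flag
scored = spark.read.parquet("s3://bucket/scored_accounts/")  # illustrative path

# Rank-ordering check: the bad rate should fall as the score decile improves
deciles = (
    scored
    .withColumn("decile", F.ntile(10).over(Window.orderBy(F.col("score"))))
    .groupBy("decile")
    .agg(F.avg("default_flag").alias("bad_rate"), F.count("*").alias("accounts"))
    .orderBy("decile")
)
deciles.show()
```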

Oneture Technologies

at Oneture Technologies

1 recruiter
Eman Khan
Posted by Eman Khan
Pune, Mumbai
3 - 8 yrs
₹15L - ₹30L / yr
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Generative AI
Large Language Models (LLM)
Llama

About the Role

We are looking for a hands-on and solution-oriented Senior Data Scientist – Generative AI to join our growing AI practice. This role is ideal for someone who thrives in designing and deploying Gen AI solutions on AWS, enjoys working with customers directly, and can lead end-to-end implementations. You will play a key role in architecting AI solutions, driving project delivery, and guiding junior team members.


Key Responsibilities

  • Design and implement end-to-end Generative AI solutions for customers on AWS.
  • Work closely with customers to understand business challenges and translate them into Gen AI use-cases.
  • Own technical delivery, including data preparation, model integration, prompt engineering, deployment, and performance monitoring.
  • Lead project execution – ensure timelines, manage stakeholder communications, and collaborate across internal teams.
  • Provide technical guidance and mentorship to junior data scientists and engineers.
  • Develop reusable components and reference architectures to accelerate delivery.
  • Stay updated with the latest developments in Gen AI, particularly AWS offerings like Bedrock, SageMaker, LangChain integrations, etc.


Required Skills & Experience

  • 4–8 years of hands-on experience in Data Science/AI/ML, with at least 2–3 years in Generative AI projects.
  • Proficient in building solutions using AWS AI/ML services (e.g., SageMaker, Amazon Bedrock, Lambda, API Gateway, S3, etc.).
  • Experience with LLMs, prompt engineering, RAG pipelines, and deployment best practices (a minimal sketch follows this list).
  • Solid programming experience in Python, with exposure to libraries such as Hugging Face, LangChain, etc.
  • Strong problem-solving skills and ability to work independently in customer-facing roles.
  • Experience in collaborating with Systems Integrators (SIs) or working with startups in India is a major plus.
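
To make the RAG pattern above concrete, a deliberately framework-free sketch of retrieve-then-generate; the embed() function and the final prompt hand-off stand in for a real embedding model and an LLM endpoint such as Amazon Bedrock:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g., one hosted on Amazon Bedrock)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

documents = ["Refund policy: refunds within 30 days.", "Shipping takes 5-7 business days."]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    # Cosine similarity between the query and every document vector
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # in a real pipeline this prompt would be sent to the LLM

print(answer("How long do refunds take?"))
```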


Soft Skills

  • Strong verbal and written communication for effective customer engagement.
  • Ability to lead discussions, manage project milestones, and coordinate across stakeholders.
  • Team-oriented with a proactive attitude and strong ownership mindset.


What We Offer

  • Opportunity to work on cutting-edge Generative AI projects across industries.
  • Collaborative, startup-like work environment with flexibility and ownership.
  • Exposure to full-stack AI/ML project lifecycle and client-facing roles.
  • Competitive compensation and learning opportunities in the AWS AI ecosystem.


About Oneture Technologies

Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of digital technologies and data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions, from ideation, project inception, and planning through deployment to ongoing support and maintenance.


Our core competencies and technical expertise include cloud-powered product engineering, big data, and AI/ML. Our deep commitment to value creation for our clients and partners, and our “Startups-like agility with Enterprises-like maturity” philosophy, have helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.

NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Hyderabad
4 - 8 yrs
₹18L - ₹30L / yr
Java
Spring Boot
Amazon Web Services (AWS)
RESTful APIs
CI/CD

Job Overview:

We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.


Responsibilities:


  • Design, develop, and maintain backend services and microservices.
  • Build and integrate RESTful APIs across distributed systems.
  • Ensure performance, scalability, and reliability of backend systems.
  • Collaborate with cross-functional teams and participate in agile development.
  • Deploy and maintain applications on AWS cloud infrastructure.
  • Contribute to automation initiatives and AI/ML feature integration.
  • Write clean, testable, and maintainable code following best practices.
  • Participate in code reviews and technical discussions.


Required Skills:

  • 4+ years of backend development experience.
  • Strong proficiency in Java and Spring/Spring Boot frameworks.
  • Solid understanding of microservices architecture.
  • Experience with REST APIs, CI/CD, and debugging complex systems.
  • Proficient in AWS services such as EC2, Lambda, S3.
  • Strong analytical and problem-solving skills.
  • Excellent communication in English (written and verbal).


Good to Have:

  • Experience with automation tools like Workato or similar.
  • Hands-on experience with Python development.
  • Familiarity with AI/ML features or API integrations.
  • Comfortable working with US-based teams (flexible hours).


Rest The Case Services Pvt Ltd
Rachana Deshpande
Posted by Rachana Deshpande
Pune
0 - 1 yrs
₹5000 - ₹10000 / mo
Python
Machine Learning (ML)
Natural Language Processing (NLP)
Large Language Models (LLM) tuning
FastAPI

Role Overview

We're looking for a highly motivated AI Intern to join our dynamic team. This isn't your average internship; you'll be diving headfirst into the exciting world of Natural Language Processing (NLP) and Large Language Models (LLMs). You will work on real-world projects, contributing directly to our core products by researching, developing, and fine-tuning state-of-the-art language models. If you're passionate about making machines understand and generate human language, this is the perfect role for you!


Skills Needed:

  1. Python
  2. ML
  3. NLP
  4. LLM Fine-Tuning
  5. FastAPI


What You'll Do (Key Responsibilities)

Develop & Implement LLM Models: Assist in building and deploying LLM solutions for tasks like sentiment analysis, text summarization, named entity recognition (NER), and question-answering.


Fine-Tune LLMs: Work hands-on with pre-trained Large Language Models (like Llama, GPT, BERT) and fine-tune them on our custom datasets to enhance performance for specific tasks.


Data Pipeline Management: Be responsible for data preprocessing, cleaning, and augmentation to create high-quality datasets for training and evaluation.


Experiment & Evaluate: Research and experiment with different model architectures and fine-tuning strategies (e.g., LoRA, QLoRA) to optimize for accuracy, speed, and cost.
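
For context on what LoRA-style fine-tuning setup code tends to look like, a minimal sketch using the Hugging Face transformers and peft libraries; the model name and target modules are illustrative and vary by architecture:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # illustrative; this checkpoint requires access approval
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all base weights
config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```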


Collaborate & Document: Work closely with our senior ML engineers and data scientists, actively participating in code reviews and clearly documenting your findings and methodologies.


Must-Have Skills (Qualifications)

Strong Python Proficiency: You live and breathe Python and are comfortable with its data science ecosystem (Pandas, NumPy, Scikit-learn).


Solid ML & NLP Fundamentals: A strong theoretical understanding of machine learning algorithms, deep learning concepts, and core NLP techniques (e.g., tokenization, embeddings, attention mechanisms).


Deep Learning Frameworks: Hands-on experience with either PyTorch or TensorFlow.


Familiarity with LLMs: You understand the basics of transformer architecture and have some exposure to working with or fine-tuning Large Language Models.


Problem-Solving Mindset: An analytical and curious approach to tackling complex challenges.


Educational Background: Currently pursuing or recently graduated with a degree in Computer Science, AI, Data Science, or a related technical field.


Brownie Points For (Preferred Skills)

Experience with the Hugging Face ecosystem (Transformers, Datasets, Tokenizers).


A portfolio of personal or academic projects on GitHub showcasing your AI/ML skills.


Familiarity with vector databases (e.g., Pinecone, ChromaDB).

Data Axle

at Data Axle

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Pune
3 - 6 yrs
Upto ₹28L / yr (varies)
Python
PySpark
SQL
Amazon Web Services (AWS)
Databricks


Data Pipeline Development: Design and implement scalable data pipelines using PySpark and Databricks on AWS cloud infrastructure

ETL/ELT Operations: Extract, transform, and load data from various sources using Python, SQL, and PySpark for batch and streaming data processing

Databricks Platform Management: Develop and maintain data workflows, notebooks, and clusters in Databricks environment for efficient data processing

AWS Cloud Services: Utilize AWS services including S3, Glue, EMR, Redshift, Kinesis, and Lambda for comprehensive data solutions

Data Transformation: Write efficient PySpark scripts and SQL queries to process large-scale datasets and implement complex business logic (see the sketch below)

Data Quality & Monitoring: Implement data validation, quality checks, and monitoring solutions to ensure data integrity across pipelines

Collaboration: Work closely with data scientists, analysts, and other engineering teams to support analytics and machine learning initiatives

Performance Optimization: Monitor and optimize data pipeline performance, query efficiency, and resource utilization in Databricks and AWS environments
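
As a small illustration of the pipeline work described above, a hedged PySpark sketch of a batch transform with a basic quality gate; the paths, schema, and the Delta Lake target are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://bucket/raw/orders/")  # illustrative source

clean = (
    raw
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)                      # basic data-quality gate
    .withColumn("order_date", F.to_date("order_ts"))  # normalize timestamp to date
)

daily = clean.groupBy("order_date").agg(
    F.sum("amount").alias("revenue"),
    F.countDistinct("customer_id").alias("customers"),
)

# Delta Lake target is an assumption; swap for parquet if Delta is unavailable
daily.write.format("delta").mode("overwrite").save("s3://bucket/curated/daily_orders/")
```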

Required Qualifications:

Experience: 3+ years of hands-on experience in data engineering, ETL development, or related field

PySpark Expertise: Strong proficiency in PySpark for large-scale data processing and transformations

Python Programming: Solid Python programming skills with experience in data manipulation libraries (pandas, etc.)

SQL Proficiency: Advanced SQL skills including complex queries, window functions, and performance optimization

Databricks Experience: Hands-on experience with Databricks platform, including notebook development, cluster management, and job scheduling

AWS Cloud Services: Working knowledge of core AWS services (S3, Glue, EMR, Redshift, IAM, Lambda)

Data Modeling: Understanding of dimensional modeling, data warehousing concepts, and ETL best practices

Version Control: Experience with Git and collaborative development workflows


Preferred Qualifications:

Education: Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field

Advanced AWS: Experience with additional AWS services like Athena, QuickSight, Step Functions, and CloudWatch

Data Formats: Experience working with various data formats (JSON, Parquet, Avro, Delta Lake)

Containerization: Basic knowledge of Docker and container orchestration

Agile Methodology: Experience working in Agile/Scrum development environments

Business Intelligence Tools: Exposure to BI tools like Tableau, Power BI, or Databricks SQL Analytics


Technical Skills Summary:

Core Technologies:

  • PySpark & Spark SQL
  • Python (pandas, boto3)
  • SQL (PostgreSQL, MySQL, Redshift)
  • Databricks (notebooks, clusters, jobs, Delta Lake)

AWS Services:

  • S3, Glue, EMR, Redshift
  • Lambda, Athena
  • IAM, CloudWatch

Development Tools:

  • Git/GitHub
  • CI/CD pipelines, Docker
  • Linux/Unix command line


Wissen Technology

at Wissen Technology

4 recruiters
Amita Soni
Posted by Amita Soni
Pune
5 - 15 yrs
Best in industry
Python
Terraform


Senior SRE Developer 

 

The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.

Responsibilities:

  • Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health (a toy sketch follows this list).
  • Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
  • Implement automated solutions for incident response, system optimization, and reliability improvement.
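
A toy sketch of the telemetry-plus-automation idea: multi-threaded health checks with a remediation hook. The endpoints and the remediate() action are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

ENDPOINTS = {
    "api": "https://api.example.com/healthz",  # hypothetical services
    "web": "https://web.example.com/healthz",
}

def check(name: str, url: str) -> tuple[str, bool]:
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return name, resp.status == 200
    except OSError:
        return name, False

def remediate(name: str) -> None:
    # Placeholder for an automated corrective action (restart, failover, page on-call)
    print(f"triggering automated remediation for {name}")

with ThreadPoolExecutor(max_workers=8) as pool:
    for name, healthy in pool.map(lambda kv: check(*kv), ENDPOINTS.items()):
        if not healthy:
            remediate(name)
```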

Requirements:

Software Development:

  • 3+ years of professional Python development experience.
  • Strong grasp of Python object-oriented programming concepts and inheritance.
  • Experience developing multi-threaded Python applications.
  • 2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
  • Proficiency in, or willingness to learn, Golang.

Operating Systems:

  • Experience with Linux operating systems.
  • Strong understanding of monitoring critical system health parameters.

Cloud:

  • 3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
  • AWS Associate-level certification or higher preferred.

Networking:

  • Basic understanding of network protocols: TCP/IP, DNS, HTTP, and load-balancing concepts.

Additional Qualifications (Preferred):

  • Familiarity with trading systems and low-latency environments is advantageous but not required.


Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Remote, Pune
4 - 9 yrs
Best in industry
Machine Learning (ML)
Python
SQL
PySpark
XGBoost

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.


Data Axle Pune is pleased to have achieved certification as a Great Place to Work!


Roles & Responsibilities:

We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Senior Data Scientist who will be responsible for:

  1. Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
  2. Design or enhance ML workflows for data ingestion, model design, model inference and scoring
  3. Oversight on team project execution and delivery
  4. If senior, establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
  5. Visualize and publish model performance results and insights to internal and external audiences


Qualifications:

  1. Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
  2. Minimum of 3.5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  3. Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
  4. Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
  5. Proficiency in Python and SQL required; PySpark/Spark experience a plus
  6. Ability to conduct a productive peer review and maintain proper code structure in GitHub
  7. Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like; see the sketch below)
  8. Working knowledge of modern CI/CD methods

This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
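
As a compact illustration of item 7 above, a hedged XGBoost train-and-evaluate sketch; the data is synthetic, and real projects would add feature engineering, cross-validation, and MLOps plumbing:

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real marketing-response dataset
X, y = make_classification(n_samples=5_000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```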

ProDT Consulting Sevices pvt LTD
Pune
1 - 2 yrs
₹1.8L - ₹4.5L / yr
Python
Flask
RESTful APIs
MongoDB
MySQL

Job Description: Python Developer  

Location: Pune   Employment Type: Full-Time   Experience: 0.6-1+ year 

Role Overview


We are looking for a skilled Python Developer with 0.6-1+ years of experience to join our team. The ideal candidate should have hands-on experience in Python, REST APIs, Flask, and databases. You will be responsible for designing, developing, and maintaining scalable backend services.  

Key Responsibilities  

  • Develop, test, and maintain high-quality Python applications.  
  • Design and build RESTful APIs using Flask (see the sketch after this list).
  • Integrate APIs with front-end and third-party services.  
  • Work with relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB,  Redis).  
  • Optimize performance and troubleshoot issues in backend applications.  
  • Collaborate with cross-functional teams to define and implement new features.  
  • Follow best practices for code quality, security, and performance optimization.  
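
A minimal Flask sketch of the REST pattern referenced above; the resource and its in-memory store are purely illustrative:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
tasks = []  # in-memory store, illustrative only

@app.route("/tasks", methods=["GET"])
def list_tasks():
    return jsonify(tasks)

@app.route("/tasks", methods=["POST"])
def create_task():
    payload = request.get_json(force=True)
    task = {"id": len(tasks) + 1, "title": payload.get("title", "")}
    tasks.append(task)
    return jsonify(task), 201

if __name__ == "__main__":
    app.run(debug=True)
```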

Required Skills  

  • Strong proficiency in Python (0.6-1+ years).  
  • Experience with Flask (or FastAPI/Django).  
  • Hands-on experience with REST API development.  
  • Proficiency in working with databases (SQL & NoSQL).  
  • Familiarity with Git, Docker, and CI/CD pipelines is a plus.  

Preferred Qualifications  

  • Bachelor's degree in Computer Science, Engineering, or a related field.  
  • Experience working in Agile/Scrum environments.  
  • Ability to write clean, scalable, and well-documented code. 
Highfly Sourcing

at Highfly Sourcing

2 candid answers
Highfly Hr
Posted by Highfly Hr
Dubai, Augsburg, Germany, Zaragoza (Spain), Qatar, Salalah (Oman), Kuwait, Lebanon, Marseille (France), Genova (Italy), Winnipeg (Canada), Denmark, Poznan (Poland), Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Hyderabad, Pune
3 - 10 yrs
₹25L - ₹30L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript

Job Description

We are looking for a talented Java Developer to work abroad. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.


Preferred Qualifications

  • Experience with microservices architecture.
  • Knowledge of cloud platforms (AWS, Azure).
  • Familiarity with Agile/Scrum methodologies.
  • Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.


Requirement Details

Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).

Proven experience as a Java Developer or similar role.

Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).

Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.

Familiarity with RESTful APIs and web services.

Understanding of version control systems (e.g., Git).

Solid understanding of object-oriented programming (OOP) principles.

Strong problem-solving skills and attention to detail.

Mumbai, Pune, Hyderabad, Bengaluru (Bangalore), Panchkula, Mohali
5 - 8 yrs
₹10L - ₹20L / yr
Python
FastAPI
Flask
Django
Git

Job Title: Python Developer (FastAPI)

Experience Required: 4+ years

Location: Pune, Bangalore, Hyderabad, Mumbai, Panchkula, Mohali 

Shift: Night Shift 6:30 pm to 3:30 AM IST

About the Role

We are seeking an experienced Python Developer with strong expertise in FastAPI to join our engineering team. The ideal candidate should have a solid background in backend development, RESTful API design, and scalable application development.


Required Skills & Qualifications

  • 4+ years of professional experience in backend development with Python.
  • Strong hands-on experience with FastAPI, or Flask/Django with migration experience (see the sketch below).
  • Familiarity with asynchronous programming in Python.
  • Working knowledge of version control systems (Git).
  • Good problem-solving and debugging skills.
  • Strong communication and collaboration abilities.
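
To anchor the FastAPI and async requirements above, a minimal async endpoint sketch; the model and route are illustrative:

```python
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    sku: str
    quantity: int

@app.post("/orders")
async def create_order(order: Order) -> dict:
    await asyncio.sleep(0.1)  # stands in for an async DB or downstream service call
    return {"status": "accepted", "sku": order.sku, "quantity": order.quantity}

# Run with: uvicorn main:app --reload  (assumes this file is named main.py)
```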

IT Industry - Night Shifts

Agency job
Bengaluru (Bangalore), Hyderabad, Mumbai, Navi Mumbai, Pune, Mohali, Delhi
5 - 10 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
IT infrastructure
Machine Learning (ML)
DevOps
Automation

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀


We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.

If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.


What you’ll do:

🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)

🔹 Build highly available, multi-region solutions for real-time & batch inference

🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines

🔹 Ensure security, compliance, and cost efficiency

🔹 Collaborate across DevOps, ML, and backend teams


What we’re looking for:

✔️ 6+ years AWS cloud infrastructure experience

✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)

✔️ Proficiency in Python/Go/Bash scripting

✔️ Knowledge of networking, IAM, and security best practices

✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)


✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)


📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi

5 days working, Work from Office

Night shifts: 9pm to 6am IST

👉 If this sounds like you (or someone you know), let’s connect!


Apply here:

Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Pune
7 - 13 yrs
Best in industry
Python
Django
RESTful APIs
Microservices
Generative AI

7+ years of experience in Python Development

Good experience in microservices and API development.

Must have exposure to large-scale data.

Good to have Gen AI experience.

Code versioning and collaboration (Git).

Knowledge of libraries for extracting data from websites (see the sketch below).

Knowledge of SQL and NoSQL databases

Familiarity with RESTful APIs

Familiarity with Cloud (Azure/AWS) technologies
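
As a quick illustration of the web-data-extraction libraries mentioned above, a hedged requests + BeautifulSoup sketch; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/jobs", timeout=10)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
# Collect the text and target of every link on the page
for link in soup.find_all("a"):
    print(link.get_text(strip=True), "->", link.get("href"))
```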


About Wissen Technology:


• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.

• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.

• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.

• Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.

• Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.

• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.

• Wissen Technology has been certified as a Great Place to Work®.

• Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.

• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, StateStreet, Flipkart, Swiggy, Trafigura, and GE.

Website : www.wissen.com


Nagpur, Maharashtra, Mumbai, Pune
2 - 4 yrs
₹6L - ₹12L / yr
Python
Django
FastAPI
Amazon Web Services (AWS)
JavaScript

Job Title : Software Development Engineer (Python, Django & FastAPI + React.js)

Experience : 2+ Years

Location : Nagpur / Remote (India)

Job Type : Full Time

Collaboration Hours : 11:00 AM – 7:00 PM IST


About the Role :

We are seeking a Software Development Engineer to join our growing team. The ideal candidate will have strong expertise in backend development with Python, Django, and FastAPI, as well as working knowledge of AWS.

While backend development is the primary focus, you should also be comfortable contributing to frontend development using JavaScript, TypeScript, and React.


Mandatory Skills : Python, Django, FastAPI, AWS, JavaScript/TypeScript, React, REST APIs, SQL/NoSQL.


Key Responsibilities :

  • Design, develop, and maintain backend services using Python (Django / FastAPI).
  • Deploy, scale, and manage applications on AWS cloud services.
  • Collaborate with frontend developers and contribute to React (JS/TS) development when required.
  • Write clean, efficient, and maintainable code following best practices.
  • Ensure system performance, scalability, and security.
  • Participate in the full software development lifecycle : planning, design, development, testing, and deployment.
  • Work collaboratively with cross-functional teams to deliver high-quality solutions.

Requirements :

  • Bachelor’s degree in Computer Science, Computer Engineering, or related field.
  • 2+ years of professional software development experience.
  • Strong proficiency in Python, with hands-on experience in Django and FastAPI.
  • Practical experience with AWS cloud services.
  • Basic proficiency in JavaScript, TypeScript, and React for frontend development.
  • Solid understanding of REST APIs, databases (SQL/NoSQL), and software design principles.
  • Familiarity with Git and collaborative workflows.
  • Strong problem-solving ability and adaptability in a fast-paced environment.

Good to Have :

  • Experience with Docker for containerization.
  • Knowledge of CI/CD pipelines and DevOps practices.
Hunarstreet Technologies

Hunarstreet Technologies

Agency job
via Hunarstreet Technologies pvt ltd by Priyanka Londhe
Mumbai, Pune, Bengaluru (Bangalore), Hyderabad, Panchkula, Mohali
5 - 8 yrs
₹15L - ₹22L / yr
Python
FastAPI
Django
Flask
backend development

Required Skills & Qualifications

  • 4+ years of professional experience in backend development with Python.
  • Strong hands-on experience with FastAPI (or Flask/Django with migration experience).
  • Familiarity with asynchronous programming in Python.
  • Working knowledge of version control systems (Git).
  • Good problem-solving and debugging skills.
  • Strong communication and collaboration abilities.
  • Should have a solid background in backend development, RESTful API design, and scalable application development.


Shift: Night Shift 6:30 pm to 3:30 AM IST

CoffeeBeans

at CoffeeBeans

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Pune
5 - 7 yrs
Upto ₹22L / yr (varies)
Python
SQL
ETL
Data modeling
Spark

Role Overview

We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.

You’ll also play a mentorship role and help establish strong engineering practices across our data projects.

Key Responsibilities

  • Design and develop large-scale, distributed data pipelines (batch and streaming)
  • Implement scalable data models, warehouses/lakehouses, and data lakes
  • Translate business requirements into technical data solutions
  • Optimize data pipelines for performance and reliability
  • Ensure code is clean, modular, tested, and documented
  • Contribute to architecture, tooling decisions, and platform setup
  • Review code/design and mentor junior engineers

Must-Have Skills

  • Strong programming skills in Python and advanced SQL
  • Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
  • Hands-on experience with frameworks like Apache Spark, Flink, etc.
  • Experience with orchestration tools like Airflow
  • Familiarity with CI/CD pipelines and Git
  • Ability to debug and scale data pipelines in production

Preferred Skills

  • Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
  • Exposure to Databricks, dbt, or similar tools
  • Understanding of data governance, quality frameworks, and observability
  • Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus

What We’re Looking For

  • Problem-solver with strong analytical skills and attention to detail
  • Fast learner who can adapt across tools, tech stacks, and domains
  • Comfortable working in fast-paced, client-facing environments
  • Willingness to travel within India when required
Wissen Technology

at Wissen Technology

4 recruiters
Annie Varghese
Posted by Annie Varghese
Pune, Mumbai, Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Snowflake
Apache Airflow
ETL
Python
PySpark

Job Summary:

We are looking for a highly skilled and experienced Data Engineer with deep expertise in Airflow, dbt, Python, and Snowflake. The ideal candidate will be responsible for designing, building, and managing scalable data pipelines and transformation frameworks to enable robust data workflows across the organization.

Key Responsibilities:

  • Design and implement scalable ETL/ELT pipelines using Apache Airflow for orchestration (see the sketch after this list).
  • Develop modular and maintainable data transformation models using dbt.
  • Write high-performance data processing scripts and automation using Python.
  • Build and maintain data models and pipelines on Snowflake.
  • Collaborate with data analysts, data scientists, and business teams to deliver clean, reliable, and timely data.
  • Monitor and optimize pipeline performance and troubleshoot issues proactively.
  • Follow best practices in version control, testing, and CI/CD for data projects.
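
A minimal Airflow sketch of the orchestration described above, triggering a dbt run after an extract step; the dbt command, schedule, and extract logic are assumptions:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data into the warehouse staging area")  # placeholder step

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # assumption; `schedule` is the Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    dbt_run = BashOperator(task_id="dbt_run", bash_command="dbt run --profiles-dir .")
    extract_task >> dbt_run  # dbt transforms run only after extraction succeeds
```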

Must-Have Skills:

  • Strong hands-on experience with Apache Airflow for scheduling and orchestrating data workflows.
  • Proficiency in dbt (data build tool) for building scalable and testable data models.
  • Expert-level skills in Python for data processing and automation.
  • Solid experience with Snowflake, including SQL performance tuning, data modeling, and warehouse management.
  • Strong understanding of data engineering best practices including modularity, testing, and deployment.

Good to Have:

  • Experience working with cloud platforms (AWS/GCP/Azure).
  • Familiarity with CI/CD pipelines for data (e.g., GitHub Actions, GitLab CI).
  • Exposure to modern data stack tools (e.g., Fivetran, Stitch, Looker).
  • Knowledge of data security and governance best practices.


Note : One face-to-face (F2F) round is mandatory, and as per the process, you will need to visit the office for this.

Wissen Technology

at Wissen Technology

4 recruiters
Anurag Sinha
Posted by Anurag Sinha
Bengaluru (Bangalore), Mumbai, Pune
4 - 8 yrs
Best in industry
Python
API
RESTful APIs
Flask
ETL
  • 4+ years of experience
  • Proficiency in Python programming.
  • Experience with Python service development (REST API/Flask API)
  • Basic knowledge of front-end development.
  • Basic knowledge of data manipulation and analysis libraries
  • Code versioning and collaboration (Git)
  • Knowledge of libraries for extracting data from websites
  • Knowledge of SQL and NoSQL databases
  • Familiarity with Cloud (Azure/AWS) technologies


Inferigence Quotient

at Inferigence Quotient

1 recruiter
Neeta Trivedi
Posted by Neeta Trivedi
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad
1 - 2 yrs
₹6L - ₹12L / yr
QML
Qt
C++
Python

We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, real-time UIs for mission-critical systems such as UAV ground control stations and cockpit displays.

Key Responsibilities

  • Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
  • Translate system requirements into responsive, interactive, and user-friendly interfaces.
  • Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
  • Collaborate with aerospace engineers to ensure compliance with DO-178C or MIL-STD guidelines where applicable.
  • Optimise application performance for low-latency visualisation in mission-critical environments.
  • Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
  • Write clean, testable, and maintainable code while adhering to aerospace software standards.
  • Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
  • Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.

Required Qualifications

  • Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
  • 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
  • Strong proficiency in C++ (11/14/17) and object-oriented programming.
  • Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
  • Knowledge of multithreading, performance optimisation, and memory management.
  • Familiarity with aerospace/automotive domain software practices or mission-critical systems.
  • Good understanding of UX principles for operator consoles and mission planning systems.
  • Strong problem-solving, debugging, and communication skills.

Desirable Skills

  • Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
  • Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
  • Exposure to DO-178C or aerospace software compliance.
  • Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
  • Experience with Linux and cross-platform development (Windows/Linux).
  • Scripting knowledge in Python for tooling and automation.
  • Background in defence, aerospace, automotive or embedded systems domain.

What We Offer

  • Opportunity to work on cutting-edge aerospace and defence technologies.
  • Collaborative and innovation-driven work culture.
  • Exposure to real-world avionics and mission systems.
  • Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.
Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
Java
Kubernetes
Google Cloud Platform (GCP)
OpenShift
Python

Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location- Pune/ Chennai


Job Type- Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform 
  • Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform. 
  • Communicate effectively with people having differing levels of technical knowledge.  
  • Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment 
  • Provide customers with complex application support, problem diagnosis and problem resolution 

Required Qualifications:    

  • Minimum of 4+ years of experience in a Web Application centric Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.  
  • Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go is required. 
  • Bachelor’s (B.E, B.Tech) or Master’s degree (M.E, M.Tech. MCA) in computer science, Computer Engineering or equivalent 
  • 2 years of development experience in a public cloud environment using Kubernetes, etc. (Google Cloud and/or AWS) 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills 
  • Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

 

Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
Java
Go Programming (Golang)
Kubernetes
Python
Apache Kafka

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required. 
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Remote, Pune
5 - 10 yrs
Best in industry
C++
Docker
Kubernetes
ECS
Amazon Web Services (AWS)

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.


Responsibilities:

  • Design, build, and maintain high-performance systems using modern C++
  • Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
  • Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
  • Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
  • Participate in system design, peer code reviews, and performance tuning


Qualifications:

  • 5+ years of software development experience, with strong command over modern C++
  • Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
  • Apache Airflow for orchestrating complex data workflows.
  • EKS (Elastic Kubernetes Service) for managing containerized workloads.
  • Proven expertise in designing and managing robust data pipelines & Microservices.
  • Proficient in building and scaling data processing workflows and working with structured/unstructured data
  • Strong hands-on experience with Docker, container orchestration, and microservices architecture
  • Working knowledge of CI/CD practices, Git, and build/release tools
  • Strong problem-solving, debugging, and cross-functional collaboration skills


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
Java
Go Programming (Golang)
Docker
OpenShift
network performance

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 4-10 years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required. 
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: 

Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

 

IT MNC

Agency job
via FIRST CAREER CENTRE by Aisha Fcc
Bengaluru (Bangalore), Noida, Hyderabad, Pune, Chennai
4 - 8 yrs
₹15L - ₹30L / yr
skill iconPython
skill iconJavascript
frappe

Development and Customization:

  • Build and customize Frappe modules to meet business requirements (a sketch follows below).
  • Develop new functionalities and troubleshoot issues in ERPNext applications.
  • Integrate third-party APIs for seamless interoperability.
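
For flavour, a hedged sketch of a custom server-side Frappe method of the kind this work involves; the filters and field choices on the standard ToDo DocType are illustrative only:

```python
import frappe

@frappe.whitelist()  # exposes the function as an authenticated REST endpoint
def overdue_tasks(owner: str):
    """Return open ToDo documents past their due date for a given owner."""
    return frappe.get_all(
        "ToDo",  # standard Frappe DocType; the filters below are illustrative
        filters={
            "owner": owner,
            "status": "Open",
            "date": ["<", frappe.utils.nowdate()],
        },
        fields=["name", "description", "date"],
    )
```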


Technical Support:

  • Provide technical support to end-users and resolve system issues.
  • Maintain technical documentation for implementations.

Collaboration:

  • Work with teams to gather requirements and recommend solutions.
  • Participate in code reviews for quality standards.

Continuous Improvement:

  • Stay updated with Frappe developments and optimize application performance.
Skills Required:

  • Proficiency in Python, JavaScript, and relational databases.
  • Knowledge of the Frappe/ERPNext framework and object-oriented programming.
  • Experience with Git for version control.
  • Strong analytical skills.

Foto Owl AI
Pune
1 - 3 yrs
₹5L - ₹6L / yr
SQL
Python
Docker
RESTful APIs
FastAPI

🚀 We’re Hiring: Senior Python Backend Developer 🚀


📍 Location: Baner, Pune (Work from Office)

💰 Compensation: ₹6 LPA

🕑 Experience Required: Minimum 2 years as a Python Backend Developer



About Us

Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.


We specialize in:

⚡ Hyper-personalized fan engagement

🤖 AI-powered real-time photo sharing

📸 Advanced media asset management



What You’ll Do


As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.


  • Architect and develop complex, secure, and scalable backend services
  • Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
  • Optimize SQL & NoSQL databases for high performance
  • Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
  • Implement observability, monitoring, and security best practices
  • Collaborate cross-functionally with product & AI teams
  • Mentor junior developers and conduct code reviews
  • Troubleshoot and resolve production issues with efficiency



What We’re Looking For


✅ Strong expertise in Python backend development

✅ Solid knowledge of Data Structures & Algorithms

✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)

✅ Proficiency in RESTful APIs & Microservice design

✅ Knowledge of Docker, Kubernetes, and cloud-native systems

✅ Experience managing AWS-based deployments



Why Join Us?


At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.

empowers digital transformation for innovative and high grow

Agency job
via Hirebound by Jebin Joy
Pune
4 - 12 yrs
₹12L - ₹30L / yr
Hadoop
Spark
Apache Kafka
ETL
skill iconJava
+2 more

To be successful in this role, you should possess:

• Collaborate closely with Product Management and Engineering leadership to devise and build the right solution.
• Participate in design discussions and brainstorming sessions to select, integrate, and maintain Big Data tools and frameworks required to solve Big Data problems at scale.
• Design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark (see the PySpark sketch after this list).
• Understand and critically review existing data pipelines, and propose improvements in collaboration with Technical Leaders and Architects to address current bottlenecks.
• Take initiative, show the drive to pick up new things proactively, and work as a senior individual contributor across the multiple products and features we have.
• 3+ years of experience in developing highly scalable Big Data pipelines.
• In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file types they deal with.
• Experience with ETL and data pipeline tools like Apache NiFi, Airflow, etc.
• Excellent coding skills in Java or Scala, including the understanding to apply appropriate design patterns when required.
• Experience with Git and build tools like Gradle/Maven/SBT.
• Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
• An elegant, readable, maintainable, and extensible code style.
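As a rough illustration of the cleanse-and-process work above, a minimal PySpark sketch; the bucket paths, column names, and app name are hypothetical:

```python
# A minimal PySpark cleansing sketch; paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-cleanse").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/events/")

clean = (
    raw.dropDuplicates(["event_id"])                     # de-duplicate on the key
       .filter(F.col("event_ts").isNotNull())            # drop rows missing a timestamp
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/events/"
)
```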


You are someone who would easily be able to:

• Work closely with the US and India engineering teams to help build the Java/Scala-based data pipelines.
• Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.
• Troubleshoot live production server issues.
• Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal supervision.
• Follow Agile methodology, using JIRA for work planning and issue management/tracking.


Additional Project/Soft Skills:

• Should be able to work independently with India & US based team members.

• Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email.

• Strong sense of urgency, with a passion for accuracy and timeliness.

• Ability to work calmly in high pressure situations and manage multiple projects/tasks.

• Ability to work independently and possess superior skills in issue resolution.

• Should have the passion to learn and implement, and to analyze and troubleshoot issues.

VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Mumbai, Pune, Noida
4 - 6 yrs
₹3L - ₹21L / yr
AWS Data Engineer
Amazon Web Services (AWS)
Python
PySpark
Databricks
+1 more

Key Responsibilities

  • Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue (see the sketch after the skills table)
  • Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
  • Perform data wrangling, cleansing, and transformation using Python and SQL
  • Collaborate with data scientists to integrate Generative AI models into analytics workflows
  • Build dashboards and reports to visualize insights using tools like Power BI or Tableau
  • Ensure data quality, governance, and security across all data assets
  • Optimize performance of data pipelines and troubleshoot bottlenecks
  • Work closely with stakeholders to understand data requirements and deliver actionable insights

🧪 Required Skills

  • Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
  • Big Data: Databricks, Apache Spark, PySpark
  • Programming: Python, SQL
  • Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
  • Analytics: Data Modeling, Visualization, BI Reporting
  • Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
  • DevOps (Bonus): Git, Jenkins, Terraform, Docker
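To make the stack above concrete, a minimal Databricks-style ELT sketch combining PySpark and Spark SQL; the lake paths, table, and columns are hypothetical, and the Delta write assumes a Databricks (or other Delta-enabled) runtime:

```python
# A minimal ELT sketch: stage raw parquet, aggregate with Spark SQL,
# write a gold table. All names and paths are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

spark.read.parquet("s3a://example-lake/raw/orders/") \
     .createOrReplaceTempView("raw_orders")

daily = spark.sql("""
    SELECT order_date,
           COUNT(*)    AS orders,
           SUM(amount) AS revenue
    FROM raw_orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")

# Delta is the default table format on Databricks runtimes.
daily.write.format("delta").mode("overwrite").save(
    "s3a://example-lake/gold/daily_orders/"
)
```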

📚 Qualifications

  • Bachelor's or Master’s degree in Computer Science, Data Science, or related field
  • 3+ years of experience in data engineering or data analytics
  • Hands-on experience with Databricks, PySpark, and AWS
  • Familiarity with Generative AI tools and frameworks is a strong plus
  • Strong problem-solving and communication skills

🌟 Preferred Traits

  • Analytical mindset with attention to detail
  • Passion for data and emerging technologies
  • Ability to work independently and in cross-functional teams
  • Eagerness to learn and adapt in a fast-paced environment


Reliable Group IN
43EQ Smartworks,Balewadi High Street,Pune
5 - 10 yrs
₹10L - ₹25L / yr
Terraform
Amazon Web Services (AWS)
Python
Scripting
cost optimization
+1 more

Position: General Cloud Automation Engineer/General Cloud Engineer

Location: Balewadi High Street, Pune

Key Responsibilities:

  • Strategic Automation Leadership
    - Drive automation to improve deployment speed and reduce manual work.
    - Promote scalable, long-term automation solutions.
  • Infrastructure as Code (IaC) & Configuration Management
    - Develop IaC using Terraform, CloudFormation, Ansible.
    - Maintain infrastructure via Ansible, Puppet, Chef.
    - Scripting in Python, Bash, PowerShell, JavaScript, GoLang.
  • CI/CD & Cloud Optimization
    - Enhance CI/CD using Jenkins, GitHub Actions, GitLab CI/CD.
    - Automate across AWS, Azure, GCP, focusing on performance, networking, and cost-efficiency (see the boto3 sketch after this list).
    - Integrate monitoring tools such as Prometheus, Grafana, Datadog, ELK.
  • Security Automation
    - Enforce security with tools like Vault, Snyk, Prisma Cloud.
    - Implement automated compliance and access controls.
  • Innovation & Continuous Improvement
    - Evaluate and adopt emerging automation tools.
    - Foster a forward-thinking automation culture.
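As a flavor of the cost-efficiency scripting above, a minimal boto3 sketch that reports unattached EBS volumes; the region is an assumption, credentials are taken from the environment, and the script only reports, it never deletes:

```python
# A minimal AWS cost-optimization report: list EBS volumes in the
# "available" state (i.e., not attached to any instance). Read-only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

paginator = ec2.get_paginator("describe_volumes")
pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])

for page in pages:
    for vol in page["Volumes"]:
        print(f"{vol['VolumeId']}: {vol['Size']} GiB, created {vol['CreateTime']}")
```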


Required Skills & Tools:

Strong background in automation, DevOps, and cloud engineering.


Expert in:

IaC: Terraform, CloudFormation, Azure ARM, Bicep

Config Mgmt: Ansible, Puppet, Chef

Cloud Platforms: AWS, Azure, GCP

CI/CD: Jenkins, GitHub Actions, GitLab CI/CD

Scripting: Python, Bash, PowerShell, JavaScript, GoLang

Monitoring & Security: Prometheus, Grafana, ELK, Vault, Prisma Cloud

Network Automation: Private Endpoints, Transit Gateways, Firewalls, etc.


Certifications Preferred:

AWS DevOps Engineer

Terraform Associate

Red Hat Certified Engineer


UniAthena
HR Athena
Posted by HR Athena
Pune
3 - 7 yrs
₹5L - ₹10L / yr
Python
Power BI
SQL
Machine Learning (ML)
Predictive modelling
+1 more

Job Requirements:

  • 3-5 years of experience in Data Science.
  • Strong expertise in statistical modeling, machine learning, deep learning, data warehousing, ETL, and reporting tools.
  • Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or Business Intelligence.
  • Experience with relevant programming languages and tools such as Python, R, SQL, Spark, Tableau, and Power BI.
  • Experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
  • Ability to think strategically and translate data insights into actionable business recommendations.
  • Excellent problem-solving and analytical skills.
  • Adaptability and openness towards a changing environment and nature of work.
  • This is a startup environment with evolving systems and procedures; the ideal candidate will be comfortable working in a fast-paced, dynamic environment and will have a strong desire to make a significant impact on the business.

Job Roles & Responsibilities:

  • Conduct in-depth analysis of large-scale datasets to uncover insights and trends.
  • Build and deploy predictive and prescriptive machine learning models for various applications (see the scikit-learn sketch after this list).
  • Design and execute A/B tests to evaluate the effectiveness of different strategies.
  • Collaborate with product managers, engineers, and other stakeholders to drive data-driven decision-making.
  • Stay up-to-date with the latest advancements in data science and machine learning.
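To illustrate the predictive-modeling responsibility above, a minimal scikit-learn sketch; the churn dataset, file name, and feature columns are hypothetical:

```python
# A minimal churn-prediction sketch; dataset and columns are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("churn.csv")  # hypothetical dataset
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y  # hold out 20% for evaluation
)

# Scale features, then fit a simple baseline classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```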

