50+ Python Jobs in Pune | Python Job openings in Pune
Apply to 50+ Python Jobs in Pune on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.



- ML Libraries: Strong proficiency in Python and experience with ML libraries (TensorFlow, PyTorch, scikit-learn)
- ML & DS: Understanding of ML modeling concepts and evaluation metrics
- Containerization & Orchestration: Hands-on experience with Docker and Kubernetes for deploying ML models
- CI/CD Pipelines: Experience building automated CI/CD pipelines using tools like Jenkins, GitLab CI, and GitHub Actions
- Cloud: Experience with AWS
- Workflow Automation: Kubeflow, Airflow, or MLflow
- Model Monitoring: Familiarity with monitoring tools and techniques for ML models (e.g., Prometheus, Grafana, Seldon, Evidently AI)
- Version Control: Experience with Git for managing code and model versioning
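The modeling-and-evaluation-metrics requirement above can be made concrete with a short sketch in pure Python (the function names are illustrative, not from any particular library; in practice scikit-learn's `precision_recall_fscore_support` does the same job):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true positives, false positives, and false negatives for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, fp, fn

def precision_recall_f1(y_true, y_pred, positive=1):
    """Standard classification metrics, guarding against zero denominators."""
    tp, fp, fn = confusion_counts(y_true, y_pred, positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

Candidates are often asked to explain when precision matters more than recall (e.g., alerting) and vice versa (e.g., fraud screening).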


- 4+ years of experience
- Proficiency in Python programming.
- Experience with Python service development (REST APIs using Flask or similar)
- Basic knowledge of front-end development.
- Basic knowledge of Data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with Cloud (Azure /AWS) technologies
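The web-scraping item above can be illustrated with only the standard library's `html.parser` (the HTML snippet is invented; production scrapers would typically use BeautifulSoup or Scrapy):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<ul><li><a href="/jobs/python">Python</a></li><li><a href="/jobs/ml">ML</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
```

After `feed()`, `parser.links` holds `["/jobs/python", "/jobs/ml"]`.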

- Demonstrated experience as a Python developer
- Good understanding and practical experience with Python frameworks including Django, Flask, and Bottle
- Proficient with Amazon Web Services and experienced in working with APIs
- Solid understanding of SQL databases such as MySQL
- Experience and knowledge of JavaScript is a benefit
- Highly skilled with attention to detail
- Good mentoring and leadership abilities
- Excellent communication skills
- Ability to prioritize and manage own workload
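The SQL requirement above can be sketched with the standard library's `sqlite3` module (the table and rows are invented for the example; the same parameterised-query pattern applies to MySQL drivers):

```python
import sqlite3

# In-memory database keeps the example self-contained
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, title TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO jobs (title, city) VALUES (?, ?)",
    [("Python Developer", "Pune"), ("Data Engineer", "Mumbai"), ("ML Engineer", "Pune")],
)
conn.commit()

# Parameterised queries (the ? placeholders) avoid SQL injection
rows = conn.execute(
    "SELECT title FROM jobs WHERE city = ? ORDER BY title", ("Pune",)
).fetchall()
titles = [title for (title,) in rows]
```

`titles` comes back as `["ML Engineer", "Python Developer"]`.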


We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, real-time UIs for mission-critical systems such as UAV ground control stations and cockpit displays.
Key Responsibilities
- Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
- Translate system requirements into responsive, interactive, and user-friendly interfaces.
- Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
- Collaborate with aerospace engineers to ensure compliance with DO-178C or MIL-STD guidelines where applicable.
- Optimise application performance for low-latency visualisation in mission-critical environments.
- Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
- Write clean, testable, and maintainable code while adhering to aerospace software standards.
- Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
- Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.
Required Qualifications
- Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
- 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
- Strong proficiency in C++ (11/14/17) and object-oriented programming.
- Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
- Knowledge of multithreading, performance optimisation, and memory management.
- Familiarity with aerospace/automotive domain software practices or mission-critical systems.
- Good understanding of UX principles for operator consoles and mission planning systems.
- Strong problem-solving, debugging, and communication skills.
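The day-to-day work here is C++/Qt, but the real-time data-source requirement (UDP, serial, CAN) ultimately comes down to decoding binary telemetry frames. A minimal Python sketch with an invented packet layout shows the idea:

```python
import struct

# Hypothetical telemetry frame: uint32 sequence number, then lat/lon/altitude
# as 32-bit floats, little-endian, as a UDP payload might arrive on the wire.
PACKET_FMT = "<Ifff"

def encode_packet(seq, lat, lon, alt_m):
    return struct.pack(PACKET_FMT, seq, lat, lon, alt_m)

def decode_packet(payload):
    seq, lat, lon, alt_m = struct.unpack(PACKET_FMT, payload)
    return {"seq": seq, "lat": lat, "lon": lon, "alt_m": alt_m}

payload = encode_packet(42, 18.52, 73.86, 550.0)
msg = decode_packet(payload)
```

In Qt/C++ the equivalent would be a `QUdpSocket` read followed by `QDataStream` or manual struct unpacking, with the decoded values pushed to QML via signals.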
Desirable Skills
- Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
- Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
- Exposure to DO-178C or aerospace software compliance.
- Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
- Experience with Linux and cross-platform development (Windows/Linux).
- Scripting knowledge in Python for tooling and automation.
- Background in defence, aerospace, automotive or embedded systems domain.
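The Python-for-tooling item above typically means small scripts around test and flight data. A sketch of a flight-log summariser using only `csv` and `statistics` (the `alt_m` column name is invented):

```python
import csv
import io
import statistics

def summarize_altitude(csv_text):
    """Return min/max/mean altitude from a flight-log CSV with an 'alt_m' column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    alts = [float(row["alt_m"]) for row in reader]
    return {"min": min(alts), "max": max(alts), "mean": statistics.mean(alts)}

log = "time,alt_m\n0,100\n1,150\n2,200\n"
summary = summarize_altitude(log)
```

For real tooling the same function would read from a file path; `io.StringIO` just keeps the example self-contained.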
What We Offer
- Opportunity to work on cutting-edge aerospace and defence technologies.
- Collaborative and innovation-driven work culture.
- Exposure to real-world avionics and mission systems.
- Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.

Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location- Pune/ Chennai
Job Type- Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric, client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go.
- Bachelor’s (B.E., B.Tech.) or Master’s degree (M.E., M.Tech., MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes and related services
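Data collection via SNMP, REST, WMI, etc. usually means normalising heterogeneous payloads into one metric schema before storage. A small sketch (the payload shapes and field names are invented for illustration):

```python
def normalize_snmp(sample):
    # e.g. {"oid": "1.3.6.1.4.1", "value": "87", "host": "sw1"} from an SNMP poller
    return {"host": sample["host"], "metric": "cpu_util", "value": float(sample["value"])}

def normalize_rest(sample):
    # e.g. {"node": "sw2", "cpu": {"utilization": 43.5}} from a REST endpoint
    return {"host": sample["node"], "metric": "cpu_util", "value": sample["cpu"]["utilization"]}

def collect(samples):
    """Dispatch each raw sample to the right normalizer by its source tag."""
    normalizers = {"snmp": normalize_snmp, "rest": normalize_rest}
    return [normalizers[src](payload) for src, payload in samples]

samples = [
    ("snmp", {"oid": "1.3.6.1.4.1", "value": "87", "host": "sw1"}),
    ("rest", {"node": "sw2", "cpu": {"utilization": 43.5}}),
]
metrics = collect(samples)
```

The payoff of a single normalized schema is that everything downstream (storage, alerting, dashboards) is collector-agnostic.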
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.


Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
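The ingestion/transformation/enrichment responsibility above is a language-agnostic pattern; the role itself is C++-centric, but a compact Python sketch (record shapes and the reference table are invented) shows the staged structure an interviewer would expect:

```python
def transform(record):
    """Normalize raw field names and types into the canonical schema."""
    return {"user_id": int(record["uid"]), "event": record["event"].lower()}

def enrich(record, user_table):
    """Join in reference data keyed by user_id (empty dict if unknown user)."""
    extra = user_table.get(record["user_id"], {})
    return {**record, **extra}

def run_pipeline(raw_records, user_table):
    # Each stage is a pure function, so stages can be tested and scaled independently
    return [enrich(transform(r), user_table) for r in raw_records]

users = {7: {"plan": "pro"}}
raw = [{"uid": "7", "event": "LOGIN"}]
out = run_pipeline(raw, users)
```

In an Airflow deployment, `transform` and `enrich` would typically become separate tasks in a DAG rather than calls inside one function.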
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


A modern work platform means a single source of truth for your desk and deskless employees alike, where everything they need is organized and easy to find.
MangoApps was designed to unify your employee experience by combining intranet, communication, collaboration, and training into one intuitive, mobile-accessible workspace.
We are looking for a highly capable machine learning engineer to optimize our machine learning systems. You will be evaluating existing machine learning (ML) processes, performing statistical analysis to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a related ML role. An exceptional machine learning engineer is someone whose expertise translates into enhanced performance of predictive automation software.
AI/ML Engineer Responsibilities
- Designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transforming data science prototypes and applying appropriate ML algorithms and tools.
- Ensuring that algorithms generate accurate user recommendations.
- Turning unstructured data into useful information through auto-tagging of images and text-to-speech conversion.
- Solving complex problems with multi-layered data sets, as well as optimizing existing machine learning libraries and frameworks.
- Applying ML algorithms to huge volumes of historical data to make predictions.
- Running tests, performing statistical analysis, and interpreting test results.
- Documenting machine learning processes.
- Keeping abreast of developments in machine learning.
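The recommendation-accuracy responsibility above usually reduces to similarity scoring. A toy sketch in pure Python (the item vectors are invented; real systems use learned embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user_vector, item_vectors, top_n=2):
    """Rank items by similarity to the user's preference vector."""
    scored = [(name, cosine(user_vector, vec)) for name, vec in item_vectors.items()]
    return [name for name, _ in sorted(scored, key=lambda x: -x[1])[:top_n]]

items = {"docs": [1, 0, 1], "chat": [0, 1, 1], "video": [1, 1, 0]}
picks = recommend([1, 0, 1], items)
```

Here the user vector matches "docs" exactly (similarity 1.0), so it ranks first.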
AI/ML Engineer Requirements
- Bachelor's degree in computer science, data science, mathematics, or a related field, with at least 5 years of experience as an AI/ML Engineer
- Advanced proficiency with Python and the FastAPI framework, along with good exposure to libraries like scikit-learn, Pandas, and NumPy
- Experience working with ChatGPT and LangChain (must-have), as well as Large Language Models and Knowledge Graphs (good to have)
- Extensive knowledge of ML frameworks, libraries, data structures, data modelling, and software architecture.
- In-depth knowledge of mathematics, statistics, and algorithms.
- Superb analytical and problem-solving abilities.
- Great communication and collaboration skills.
Skills: FastAPI, Python, large language models, Pandas, artificial intelligence, mathematics, machine learning, knowledge graphs, Flask, Python for data analysis, scikit-learn, LangChain, algorithms, data science, ChatGPT, NumPy, statistics
Why work with us
- We take delight in what we do, and it shows in the products we offer and the ratings our products receive from leading industry analysts like IDC, Forrester, and Gartner, and from independent sites like Capterra.
- Be part of the team that has a great product-market fit, solving some of the most relevant communication and collaboration challenges faced by big and small organizations across the globe.
- MangoApps is a highly collaborative place, and careers at MangoApps come with many growth and learning opportunities. If you’re looking to make an impact, MangoApps is the place for you.
- We focus on getting things done and know how to have fun while we do them. We have a team that brings creativity, energy, and excellence to every engagement.
- A workplace that was listed as one of the top 51 Dream Companies to work for by World HRD Congress in 2019.
- We have a flat structure and treat everyone the same.
Benefits
We are a young organization and growing fast. Along with a fantastic workplace culture that helps you meet your career aspirations, we provide comprehensive benefits:
1. Comprehensive Health Insurance for Family (Including Parents) with no riders attached.
2. Accident Insurance for each employee.
3. Sponsored Trainings, Courses and Nano Degrees.
About You
· Self-motivated: You can work with a minimum of supervision and be capable of strategically prioritizing multiple tasks in a proactive manner.
· Driven: You are a driven team player, collaborator, and relationship builder whose infectious can-do attitude inspires others and encourages great performance in a fast-moving environment.
· Entrepreneurial: You thrive in a fast-paced, changing environment and you’re excited by the chance to play a large role.
· Passionate: You must be passionate about online collaboration and ensuring our clients are successful; we love seeing hunger and ambition.
· Thrive in a start-up mentality with a “whatever it takes” attitude.



Full Stack Mobile Application Developer
No of positions - 2
Strictly in-office position
Seniority - 1.5-4 years
About Wednesday
Wednesday is an engineering services company specializing in Data Engineering, applied AI, and Product Engineering. We partner with ambitious companies to solve their most pressing engineering challenges.
Job Description
We seek a highly skilled Fullstack Mobile Developer who is passionate about crafting exceptional mobile experiences and robust backend systems. This role demands a deep understanding of React Native, Node.js, and modern cloud ecosystems like AWS, combined with a commitment to best practices and continuous improvement.
As part of our team, you will work closely with cross-functional teams to design, build, and maintain high-performance mobile applications and scalable backend solutions for our clients. The ideal candidate is a team player with a growth mindset, a passion for excellence, and the ability to energise and inspire those around them.
Key Responsibilities
Mobile Development
- Design, develop, and maintain high-quality, scalable mobile applications using React Native.
- Implement responsive UI/UX designs that deliver seamless user experiences across devices.
- Leverage modern development techniques and tools to ensure robust and maintainable codebases.
Backend Engineering
- Build and maintain scalable backend systems using Python or Node.js and cloud technologies like AWS.
- Design and manage relational (SQL) and non-relational (NoSQL) databases to support application functionality.
- Develop RESTful APIs and integrations to power mobile applications and services.
Programming Best Practices
- Use programming best practices, including code reviews, automated testing, and documentation.
Collaboration and Communication
- Work with other engineers to deliver on client expectations and project goals.
Continuous Learning and Mentorship
- Demonstrate a commitment to learning new tools, techniques, and technologies to stay at the forefront of mobile and backend development.
Qualifications
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 1.5-4 years of professional experience as a Fullstack Developer.
- Proven expertise in building and scaling mobile applications and backend systems in a services environment.
- Strong proficiency in Python or Node.js, AWS, and database technologies (SQL or NoSQL).
- Knowledge of software engineering best practices, including clean code principles and test-driven development.
- Familiarity with AI copilots and a willingness to incorporate AI-driven tools into workflows.
- Excellent communication and collaboration skills, with the ability to inspire and influence teammates.
- Demonstrated growth mindset, passion for learning, and a commitment to excellence.
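The RESTful-API responsibility above can be sketched end to end with only the standard library (the `/tasks` route and payload are invented for illustration; a production service would use Flask, FastAPI, or Express):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TaskHandler(BaseHTTPRequestHandler):
    """Serve a fixed task list as JSON at GET /tasks."""
    TASKS = [{"id": 1, "title": "Ship v1"}]

    def do_GET(self):
        if self.path == "/tasks":
            body = json.dumps(self.TASKS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the example

def serve_once(port=0):
    """Start the server on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), TaskHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = serve_once()
url = f"http://127.0.0.1:{server.server_port}/tasks"
with urllib.request.urlopen(url) as resp:
    tasks = json.loads(resp.read())
server.shutdown()
```

Port 0 asks the OS for a free port, so the example never collides with a running service.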


Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled, hands-on Staff Software Engineer in backend technologies to contribute to the future development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
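The streaming-architecture responsibility above hinges on incremental computation: never re-scan the whole stream per event. A minimal sliding-window aggregator (a toy stand-in for what Kafka Streams or Flink windowing does at scale):

```python
from collections import deque

class SlidingWindowAverage:
    """Maintain the mean of the most recent `size` samples in O(1) per update."""

    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.total = 0.0

    def push(self, value):
        # Add the new sample, evict the oldest once the window is full,
        # and keep a running total so no re-summation is needed.
        self.window.append(value)
        self.total += value
        if len(self.window) > self.size:
            self.total -= self.window.popleft()
        return self.total / len(self.window)

avg = SlidingWindowAverage(size=3)
latest = [avg.push(v) for v in [10, 20, 30, 40]]
```

Each `push` costs constant time regardless of stream length, which is what makes per-event latency predictable under high throughput.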
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of object-oriented design patterns and frameworks like Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 4-10 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana:
Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.


As an L3 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications, such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 6–9 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or a strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
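The RAG workflow named in the responsibilities above boils down to two steps: retrieve the documents most relevant to a query, then assemble them into a prompt for the LLM. A minimal sketch, assuming a toy corpus and simple bag-of-words scoring (a real pipeline would use embeddings and a vector store instead):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Stuff the retrieved context into a prompt for the LLM call."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Invoices are processed within 30 days of receipt.",
    "The support portal is available 24/7 for ticket creation.",
    "Refunds for cancelled orders take 5 business days.",
]
top = retrieve("how long do refunds take", docs, k=1)
print(build_prompt("how long do refunds take", top))
```

The corpus and the prompt template are invented for illustration; the point is the retrieve-then-generate shape that frameworks like LangChain package up.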


As an L1/L2 Data Scientist, you’ll work alongside experienced engineers and data scientists to solve real-world problems using machine learning (ML) and generative AI (GenAI). Beyond classical data science tasks, you’ll contribute to building and fine-tuning large language model (LLM)-based applications, such as chatbots, copilots, and automation workflows.
Key Responsibilities
- Collaborate with business stakeholders to translate problem statements into data science tasks.
- Perform data collection, cleaning, feature engineering, and exploratory data analysis (EDA).
- Build and evaluate ML models using Python and libraries such as scikit-learn and XGBoost.
- Support the development of LLM-powered workflows like RAG (Retrieval-Augmented Generation), prompt engineering, and fine-tuning for use cases including summarization, Q&A, and task automation.
- Contribute to GenAI application development using frameworks like LangChain, OpenAI APIs, or similar ecosystems.
- Work with engineers to integrate models into applications, build/test APIs, and monitor performance post-deployment.
- Maintain reproducible notebooks, pipelines, and documentation for ML and LLM experiments.
- Stay updated on advancements in ML, NLP, and GenAI, and share insights with the team.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 2.5–5 years of experience in data science, ML, or AI (projects and internships included).
- Proficiency in Python with experience in libraries like pandas, NumPy, scikit-learn, and matplotlib.
- Basic exposure to LLMs (e.g., OpenAI, Cohere, Mistral, Hugging Face) or strong interest with the ability to learn quickly.
- Familiarity with SQL and structured data handling.
- Understanding of NLP fundamentals and vector-based retrieval techniques (a plus).
- Strong communication, problem-solving skills, and a proactive attitude.
Nice-to-Have (Not Mandatory)
- Experience with GenAI prototyping using LangChain, LlamaIndex, or similar frameworks.
- Knowledge of REST APIs and model integration into backend systems.
- Familiarity with cloud platforms (AWS/GCP/Azure), Docker, or Git.
About the Role
We are looking for a hands-on and solution-oriented Senior Data Scientist – Generative AI to join our growing AI practice. This role is ideal for someone who thrives in designing and deploying Gen AI solutions on AWS, enjoys working with customers directly, and can lead end-to-end implementations. You will play a key role in architecting AI solutions, driving project delivery, and guiding junior team members.
Key Responsibilities
- Design and implement end-to-end Generative AI solutions for customers on AWS.
- Work closely with customers to understand business challenges and translate them into Gen AI use cases.
- Own technical delivery, including data preparation, model integration, prompt engineering, deployment, and performance monitoring.
- Lead project execution – ensure timelines, manage stakeholder communications, and collaborate across internal teams.
- Provide technical guidance and mentorship to junior data scientists and engineers.
- Develop reusable components and reference architectures to accelerate delivery.
- Stay updated with the latest developments in Gen AI, particularly AWS offerings like Bedrock, SageMaker, LangChain integrations, etc.
Required Skills & Experience
- 4–8 years of hands-on experience in Data Science/AI/ML, with at least 2–3 years in Generative AI projects.
- Proficient in building solutions using AWS AI/ML services (e.g., SageMaker, Amazon Bedrock, Lambda, API Gateway, S3, etc.).
- Experience with LLMs, prompt engineering, RAG pipelines, and deployment best practices.
- Solid programming experience in Python, with exposure to libraries such as Hugging Face, LangChain, etc.
- Strong problem-solving skills and ability to work independently in customer-facing roles.
- Experience in collaborating with Systems Integrators (SIs) or working with startups in India is a major plus.
Soft Skills
- Strong verbal and written communication for effective customer engagement.
- Ability to lead discussions, manage project milestones, and coordinate across stakeholders.
- Team-oriented with a proactive attitude and strong ownership mindset.
What We Offer
- Opportunity to work on cutting-edge Generative AI projects across industries.
- Collaborative, startup-like work environment with flexibility and ownership.
- Exposure to full-stack AI/ML project lifecycle and client-facing roles.
- Competitive compensation and learning opportunities in the AWS AI ecosystem.
About Oneture Technologies
Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of Digital Technologies and Data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions, from ideation and project inception, through planning and deployment, to ongoing support and maintenance.
Our core competencies and technical expertise include cloud-powered product engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners, and our “Startup-like agility with Enterprise-like maturity” philosophy, has helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.

Development and Customization:
Build and customize Frappe modules to meet business requirements.
Develop new functionalities and troubleshoot issues in ERPNext applications.
Integrate third-party APIs for seamless interoperability.
Technical Support:
Provide technical support to end-users and resolve system issues.
Maintain technical documentation for implementations.
Collaboration:
Work with teams to gather requirements and recommend solutions.
Participate in code reviews for quality standards.
Continuous Improvement:
Stay updated with Frappe developments and optimize application performance.
Skills Required:
Proficiency in Python, JavaScript, and relational databases.
Knowledge of Frappe/ERPNext framework and object-oriented programming.
Experience with Git for version control.
Strong analytical skills.

🚀 We’re Hiring: Senior Python Backend Developer 🚀
📍 Location: Baner, Pune (Work from Office)
💰 Compensation: ₹6 LPA
🕑 Experience Required: Minimum 2 years as a Python Backend Developer
About Us
Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.
We specialize in:
⚡ Hyper-personalized fan engagement
🤖 AI-powered real-time photo sharing
📸 Advanced media asset management
What You’ll Do
As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.
Architect and develop complex, secure, and scalable backend services
Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
Optimize SQL & NoSQL databases for high performance
Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
Implement observability, monitoring, and security best practices
Collaborate cross-functionally with product & AI teams
Mentor junior developers and conduct code reviews
Troubleshoot and resolve production issues with efficiency
What We’re Looking For
✅ Strong expertise in Python backend development
✅ Solid knowledge of Data Structures & Algorithms
✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)
✅ Proficiency in RESTful APIs & Microservice design
✅ Knowledge of Docker, Kubernetes, and cloud-native systems
✅ Experience managing AWS-based deployments
Why Join Us?
At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.


Only candidates living in Pune or who are willing to relocate to Pune, please apply:
Job Description:
We are seeking a talented and motivated PHP/Laravel Developer to join our dynamic team. The ideal candidate will be responsible for building and maintaining robust web applications, ensuring top-notch performance, and contributing to the company’s growth by driving technological innovation.
You will contribute directly to developing high-quality, innovative software solutions that exceed client expectations, support the company’s vision for technological excellence and innovation, and strengthen its competitive edge in the market. You will play a key role in building scalable, future-ready applications that align with long-term business goals.
Key Responsibilities:
You will develop and maintain web applications using PHP, Laravel, MySQL, JavaScript, jQuery, CSS, Bootstrap, and HTML.
You will collaborate with cross-functional teams to define, design, and ship new features.
You will ensure the performance, quality, and responsiveness of applications.
You will identify and correct bottlenecks and fix bugs.
You will help maintain code quality, organization, and automation.
You will implement security and data protection measures.
You need to stay up-to-date with new technologies and industry trends to incorporate them into operations and activities.
Candidate Profile:
Required Qualifications:
Proficiency in PHP, Laravel, MySQL, JavaScript, jQuery, CSS, Bootstrap, and HTML.
Solid understanding of object-oriented programming (OOP) principles.
Experience with version control tools like Git.
Familiarity with the full software development life cycle (SDLC).
Ability to work independently and in a team environment.
Strong problem-solving skills and attention to detail.
Understanding of RESTful APIs.
Desired Qualifications:
Experience with Angular, React, and Python is a plus.
Knowledge of CI/CD tools like Jenkins, Docker.
Experience with unit and integration testing.
Ability to design and implement database schemas that represent and support business processes.

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.
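The batch and streaming pipelines described above share the same extract-transform-load shape; only the driving loop differs. A toy generator-based sketch (the record layout, cleaning rules, and 18% tax figure are invented for the example; a real pipeline would run these stages on Spark or Flink):

```python
from typing import Iterable, Iterator

def extract(raw: Iterable[str]) -> Iterator[dict]:
    """Parse raw 'id,amount' lines into records, skipping malformed rows."""
    for line in raw:
        parts = line.strip().split(",")
        if len(parts) == 2 and parts[1].replace(".", "", 1).isdigit():
            yield {"id": parts[0], "amount": float(parts[1])}

def transform(records: Iterable[dict]) -> Iterator[dict]:
    """Apply a business rule: drop zero amounts, derive a tax column."""
    for r in records:
        if r["amount"] > 0:
            yield {**r, "tax": round(r["amount"] * 0.18, 2)}

def load(records: Iterable[dict]) -> list[dict]:
    """Stand-in sink; a real pipeline would write to a warehouse or lake."""
    return list(records)

raw = ["a1,100.0", "a2,0", "bad row", "a3,50"]
result = load(transform(extract(raw)))
print(result)  # two cleaned records with tax applied
```

Because each stage is a generator, the same code works record-by-record over an unbounded stream or over a finite batch, which is the core idea behind unified batch/stream frameworks.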
To be successful in this role, you should possess
• Collaborate closely with Product Management and Engineering leadership to devise and build the right solution.
• Participate in design discussions and brainstorming sessions to select, integrate, and maintain Big Data tools and frameworks required to solve Big Data problems at scale.
• Design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark.
• Understand and critically review existing data pipelines, and come up with ideas in collaboration with Technical Leaders and Architects to improve upon current bottlenecks.
• Take initiative, show the drive to pick up new things proactively, and work as a Senior Individual Contributor on the multiple products and features we have.
• 3+ years of experience in developing highly scalable Big Data pipelines.
• In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file types they deal with.
• Experience with ETL and data pipeline tools like Apache NiFi, Airflow, etc.
• Excellent coding skills in Java or Scala, including the understanding to apply appropriate Design Patterns when required.
• Experience with Git and build tools like Gradle/Maven/SBT.
• Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
• An elegant, readable, maintainable, and extensible code style.
You are someone who would easily be able to
• Work closely with the US and India engineering teams to help build the Java/Scala-based data pipelines.
• Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.
• Troubleshoot live production server issues.
• Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal supervision.
• Follow Agile methodology, using JIRA for work planning and issue management/tracking.
Additional Project/Soft Skills:
• Should be able to work independently with India- and US-based team members.
• Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email.
• Strong sense of urgency, with a passion for accuracy and timeliness.
• Ability to work calmly in high-pressure situations and manage multiple projects/tasks.
• Ability to work independently and possess superior skills in issue resolution.
• A passion to learn and implement, and to analyze and troubleshoot issues.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment
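The "data wrangling, cleansing, and transformation using Python and SQL" responsibility above can be illustrated with the standard library's sqlite3 as a stand-in for Redshift or Databricks SQL; the sales table and the cleansing rule (filtering NULL amounts before aggregating) are made up for the example:

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("west", 120.0), ("west", 80.0), ("east", None), ("east", 200.0)],
)

# Cleanse (drop NULLs) and aggregate in a single SQL pass.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM sales
    WHERE amount IS NOT NULL
    GROUP BY region
    ORDER BY region
    """
).fetchall()
print(rows)  # [('east', 200.0), ('west', 200.0)]
```

The same filter-then-aggregate pattern carries over directly to Spark SQL or a Glue job, just against distributed storage instead of an in-memory table.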

AccioJob is conducting a Walk-In Hiring Drive with one of the top global consulting & services companies for the position of Python Automation Engineer.
To apply, register and select your slot here: https://go.acciojob.com/raeUXs
Required Skills: Python, OOPs, DSA, Aptitude
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: All
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Pune (Onsite)
- CTC: 3 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview 1
- Technical Interview 2
- Tech+Managerial Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/raeUXs

Position: General Cloud Automation Engineer/General Cloud Engineer
Location: Balewadi High Street, Pune
Key Responsibilities:
- Strategic Automation Leadership
- Drive automation to improve deployment speed and reduce manual work.
- Promote scalable, long-term automation solutions.
- Infrastructure as Code (IaC) & Configuration Management
- Develop IaC using Terraform, CloudFormation, Ansible.
- Maintain infrastructure via Ansible, Puppet, Chef.
- Scripting in Python, Bash, PowerShell, JavaScript, GoLang.
- CI/CD & Cloud Optimization
- Enhance CI/CD using Jenkins, GitHub Actions, GitLab CI/CD.
- Automate across AWS, Azure, GCP, focusing on performance, networking, and cost-efficiency.
- Integrate monitoring tools such as Prometheus, Grafana, Datadog, ELK.
- Security Automation
- Enforce security with tools like Vault, Snyk, Prisma Cloud.
- Implement automated compliance and access controls.
- Innovation & Continuous Improvement
- Evaluate and adopt emerging automation tools.
- Foster a forward-thinking automation culture.
Required Skills & Tools:
Strong background in automation, DevOps, and cloud engineering.
Expert in:
IaC: Terraform, CloudFormation, Azure ARM, Bicep
Config Mgmt: Ansible, Puppet, Chef
Cloud Platforms: AWS, Azure, GCP
CI/CD: Jenkins, GitHub Actions, GitLab CI/CD
Scripting: Python, Bash, PowerShell, JavaScript, GoLang
Monitoring & Security: Prometheus, Grafana, ELK, Vault, Prisma Cloud
Network Automation: Private Endpoints, Transit Gateways, Firewalls, etc.
Certifications Preferred:
AWS DevOps Engineer
Terraform Associate
Red Hat Certified Engineer


Job Requirement :
- 3-5 Years of experience in Data Science
- Strong expertise in statistical modeling, machine learning, deep learning, data warehousing, ETL, and reporting tools.
- Bachelor's/Master's in Data Science, Statistics, Computer Science, Business Intelligence, or a related field
- Experience with relevant programming languages and tools such as Python, R, SQL, Spark, Tableau, Power BI.
- Experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn
- Ability to think strategically and translate data insights into actionable business recommendations.
- Excellent problem-solving and analytical skills
- Adaptability and openness towards changing environment and nature of work
- This is a startup environment with evolving systems and procedures; the ideal candidate will be comfortable working in a fast-paced, dynamic environment and will have a strong desire to make a significant impact on the business.
Job Roles & Responsibilities:
- Conduct in-depth analysis of large-scale datasets to uncover insights and trends.
- Build and deploy predictive and prescriptive machine learning models for various applications.
- Design and execute A/B tests to evaluate the effectiveness of different strategies.
- Collaborate with product managers, engineers, and other stakeholders to drive data-driven decision-making.
- Stay up-to-date with the latest advancements in data science and machine learning.

About Us
1E9 Advisors is a technology company delivering strategy and solutions. We create and manage software products and provide support with related services across industries.
We have a deep understanding of energy and commodity markets, risk management, and technology, which we weave together to solve problems simply and innovatively. Our team builds modern, reliable systems that help businesses operate efficiently and make smarter decisions.
About the Role
We are looking for highly motivated Python Developer Interns who are eager to learn, take ownership, and contribute meaningfully from day one. This role provides a high-responsibility, high-learning environment where you'll work closely with experienced engineers to build real products and systems. Successful interns will be given a chance to continue in a full-time role.
Important Note: This position is not open to applicants who are currently enrolled in full-time degree programs. This internship is designed to transition into a full-time role for successful candidates, so we are seeking candidates who are available for immediate full-time employment upon completion of the internship.
Key Responsibilities
- Develop and maintain backend applications using Python and Django
- Collaborate with the team to design, build, test, and deploy features
- Debug issues and participate in daily problem-solving
- Take ownership of assigned modules or tasks
- Write clear and clean code with documentation
- Contribute to internal discussions and product planning
What we’re looking for:
- Strong programming fundamentals, especially in Python (3.7+)
- Knowledge of data structures and algorithms
- Familiarity with Git and GitHub workflows
- Experience with general-purpose languages and basic web development
- Curiosity, analytical thinking, and attention to detail
- Clear communication and a proactive, collaborative mindset
Preferred Skills (Nice to Have)
- Django
- Django REST Framework (DRF) / FastAPI (for REST APIs)
- Frontend technologies: HTML5, CSS3, Bootstrap
- JavaScript frameworks (ReactJS / VueJS / AngularJS)
- Linux environment experience
- Shell scripting and Pandas
- Data visualization tools (D3.js / Observable)
What you’ll gain:
- Real-world experience solving critical problems in the energy space and enterprise application development
- Mentorship from an experienced team and continuous hands-on learning
- Ownership of live modules and contributions to production systems
- A fast-paced, collaborative, and impact-driven work culture
- Potential pathway to a full-time opportunity


Are you passionate about the clean energy transition and looking to build real-world experience at the intersection of energy, data, and technology?
1E9 Advisors is seeking motivated Energy Analyst Interns to join our team. We’re a technology company that delivers strategy and software solutions across industries, with deep expertise in energy markets, commodities, and risk management.
This internship is ideal for candidates who are analytical, detail-oriented, and eager to explore how battery storage and market optimization work in real-world settings.
Important Note: This position is not open to applicants who are currently enrolled in full-time degree programs. This internship is designed to transition into a full-time role for successful candidates, so we are seeking candidates who are available for immediate full-time employment upon completion of the internship.
What You’ll Work On:
- Analyze and interpret energy market data, including pricing, generation, and capacity
- Track evolving US electricity market rules and support policy analysis
- Assist in valuation and optimization models for battery energy storage
- Collaborate on internal product development and customer-facing insights
What We’re Looking For:
- Strong analytical and communication skills
- Proficiency in Python and/or Excel
- Interest in energy markets, clean technology, or battery storage
- Attention to detail and a proactive mindset
Preferred Skills:
- Understanding of US electricity markets
- Hands-on experience with battery storage valuation
- Effective written and verbal communication
What You’ll Gain:
- Practical exposure to energy markets and clean tech analytics
- Mentorship and hands-on project ownership
- Experience contributing to a production-grade software platform (BatteryOS)
- Potential pathway to a full-time opportunity


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct a productive peer review and proper code structure in Github
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Job Title: Senior Python Developer – Product Engineering
Location: Pune, India
Experience Required: 3 to 7 Years
Employment Type: Full-time
Employment Agreement: Minimum 3 years (on completion of 3 years, a one-time commitment bonus will be applicable based on performance)
🏢 About Our Client
Our client is a leading enterprise cybersecurity company offering an integrated platform for Digital Rights Management (DRM), Enterprise File Sync and Share (EFSS), and Content-Aware Data Protection (CDP). With patented technologies for secure file sharing, endpoint encryption, and real-time policy enforcement, the platform helps organizations maintain control over sensitive data, even after it leaves the enterprise perimeter.
🎯 Role Overview
We are looking for a skilled Python Developer with a strong product mindset and experience building scalable, secure, and performance-critical systems. You will join our core engineering team to enhance backend services powering DRM enforcement, file tracking, audit logging, and file sync engines.
This is a hands-on role for someone who thrives in a product-first, security-driven environment and wants to build technologies that handle terabytes of enterprise data across thousands of endpoints.
🛠️ Key Responsibilities
● Develop and enhance server-side services for DRM policy enforcement, file synchronization, data leak protection, and endpoint telemetry.
● Build Python-based backend APIs and services that interact with file systems, agent software, and enterprise infrastructure.
● Work on delta sync, file versioning, audit trails, and secure content preview/rendering services.
● Implement secure file handling, encryption workflows, and token-based access controls across modules.
● Collaborate with DevOps to optimize scalability, performance, and availability of core services across hybrid deployments (on-prem/cloud).
● Debug and maintain production-level services; drive incident resolution and performance optimization.
● Integrate with 3rd-party platforms such as LDAP, AD, DLP, CASB, and SIEM systems.
● Participate in code reviews, architecture planning, and mentoring junior developers.
📌 Required Skills & Experience
● 3+ years of professional experience with Python 3.x, preferably in enterprise or security domains.
● Strong understanding of multithreading, file I/O, inter-process communication, and low-level system APIs.
● Expertise in building RESTful APIs, schedulers, workers (Celery), and microservices.
● Solid experience with encryption libraries (OpenSSL, cryptography.io) and secure coding practices.
● Hands-on experience with PostgreSQL, Redis, SQLite, or other transactional and cache stores.
● Familiarity with Linux internals, filesystem hooks, journaling/logging systems, and OS-level operations.
● Experience with source control (Git), containerization (Docker/K8s), and CI/CD.
● Proven ability to write clean, modular, testable, and scalable code for production environments.
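As a rough illustration of the token-based access controls mentioned in the responsibilities above, here is a minimal sketch of issuing and verifying an HMAC-signed access token using only the Python standard library. The field names, key handling, and expiry policy are illustrative assumptions, not the product's actual scheme; a real deployment would use managed key storage and an established token format such as JWT.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical key; production systems fetch this from a key manager


def issue_token(user_id, ttl_seconds=3600):
    """Issue a signed token carrying a user id and an expiry timestamp."""
    payload = json.dumps({"uid": user_id, "exp": int(time.time()) + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()  # hex keeps the separator unambiguous
    return base64.urlsafe_b64encode(payload).decode() + "." + sig


def verify_token(token):
    """Return the claims if the signature and expiry check out, else None."""
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64.encode())
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None
    return claims


token = issue_token("alice")
print(verify_token(token))  # claims dict for a valid, unexpired token
```

A tampered token (any altered byte in the payload or signature) fails the `compare_digest` check and is rejected.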
➕ Preferred/Bonus Skills
● Experience in EFSS, DRM, endpoint DLP, or enterprise content security platforms.
● Knowledge of file diffing algorithms (rsync, delta encoding) or document watermarking.
● Prior experience with agent-based software (Windows/Linux), desktop sync tools, or version control systems.
● Exposure to compliance frameworks (e.g., DPDP Act, GDPR, RBI-CSF) is a plus.
🌟 What We Offer
● Work on a patented and mission-critical enterprise cybersecurity platform
● Join a fast-paced team focused on innovation, security, and customer success
● Hybrid work flexibility with competitive compensation and growth opportunities
● Direct impact on product roadmap, architecture, and IP development

Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
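The automation-scripting requirement above often comes down to small, robust helpers. As a hypothetical example, a retry-with-backoff wrapper of the kind used in deployment and monitoring scripts; the function names, delays, and simulated endpoint are invented for the sketch:

```python
import time
from functools import wraps


def retry(attempts=3, base_delay=0.1):
    """Retry a flaky call with exponential backoff before giving up."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the real error
                    time.sleep(base_delay * 2 ** i)  # 0.1s, 0.2s, 0.4s, ...
        return wrapper
    return decorator


calls = {"n": 0}


@retry(attempts=4, base_delay=0.01)
def flaky_health_check():
    # Simulated endpoint that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"


print(flaky_health_check())  # succeeds on the third attempt
```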
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Work Mode & Timing:
- Hybrid – Pune-based candidates preferred.
- Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.

We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway.
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
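To ground the Lambda/API Gateway requirement above, here is a minimal handler sketch for API Gateway's proxy integration, runnable locally with a fake event. The route, parameter name, and response body are illustrative assumptions, not a specific service's contract:

```python
import json


def lambda_handler(event, context):
    """Minimal AWS Lambda handler behind API Gateway (proxy integration).

    Reads a 'name' query parameter and returns a JSON greeting.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }


# Local invocation with a fake API Gateway event:
resp = lambda_handler({"queryStringParameters": {"name": "pune"}}, None)
print(resp["statusCode"], resp["body"])
```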

Job Title : Senior Python Developer – Product Engineering
Experience : 5 to 8 Years
Location : Pune, India (Hybrid – 3-4 days WFO, 1-2 days WFH)
Employment Type : Full-time
Commitment : Minimum 3 years (with end-of-term bonus)
Openings : 2 positions
- Junior : 3 to 5 Years
- Senior : 5 to 8 Years
Mandatory Skills : Python 3.x, REST APIs, multithreading, Celery, encryption (OpenSSL/cryptography.io), PostgreSQL/Redis, Docker/K8s, secure coding
Nice to Have : Experience with EFSS/DRM/DLP platforms, delta sync, file systems, LDAP/AD/SIEM integrations
🎯 Roles & Responsibilities :
- Design and develop backend services for DRM enforcement, file synchronization, and endpoint telemetry.
- Build scalable Python-based APIs interacting with file systems, agents, and enterprise infra.
- Implement encryption workflows, secure file handling, delta sync, and file versioning.
- Integrate with 3rd-party platforms: LDAP, AD, DLP, CASB, SIEM.
- Collaborate with DevOps to ensure high availability and performance of hybrid deployments.
- Participate in code reviews, architectural discussions, and mentor junior developers.
- Troubleshoot production issues and continuously optimize performance.
✅ Required Skills :
- 5 to 8 years of hands-on experience in Python 3.x development.
- Expertise in REST APIs, Celery, multithreading, and file I/O.
- Proficient in encryption libraries (OpenSSL, cryptography.io) and secure coding.
- Experience with PostgreSQL, Redis, SQLite, and Linux internals.
- Strong command over Docker, Kubernetes, CI/CD, and Git workflows.
- Ability to write clean, testable, and scalable code in production environments.
➕ Preferred Skills :
- Background in DRM, EFSS, DLP, or enterprise security platforms.
- Familiarity with file diffing, watermarking, or agent-based tools.
- Knowledge of compliance frameworks (GDPR, DPDP, RBI-CSF) is a plus.
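The delta sync and file diffing topics above can be sketched in a few lines. This is a deliberately simplified fixed-block variant: production rsync-style engines use rolling hashes so that insertions do not shift every subsequent block, and the tiny block size here is purely for illustration.

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; real sync engines use kilobyte-sized blocks


def block_hashes(data: bytes):
    """Hash each fixed-size block of the file."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def delta(old: bytes, new: bytes):
    """Return (index, block) pairs for blocks that changed between versions."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [(i, new[i * BLOCK:(i + 1) * BLOCK])
            for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]


def apply_delta(old: bytes, changes, new_len: int) -> bytes:
    """Rebuild the new version from the old file plus the changed blocks."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for i, block in changes:
        buf[i * BLOCK:i * BLOCK + len(block)] = block
    return bytes(buf)


old = b"aaaabbbbccccdddd"
new = b"aaaaXXXXccccdddd"
changes = delta(old, new)
print(changes)  # only the second block differs, so only it is transferred
```

Only the changed blocks travel over the wire; the receiver patches its copy with `apply_delta`.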


Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics


Requirements
- 7+ years of experience with Python
- Strong expertise in Python frameworks (Django, Flask, or FastAPI)
- Experience with GCP, Terraform, and Kubernetes
- Deep understanding of REST API development and GraphQL
- Strong knowledge of SQL and NoSQL databases
- Experience with microservices architecture
- Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
- Experience with container orchestration using Kubernetes
- Understanding of cloud architecture and serverless computing
- Experience with monitoring and logging solutions
- Strong background in writing unit and integration tests
- Familiarity with AI/ML concepts and integration points
Responsibilities
- Design and develop scalable backend services for our AI platform
- Architect and implement complex systems with high reliability
- Build and maintain APIs for internal and external consumption
- Work closely with AI engineers to integrate ML functionality
- Optimize application performance and resource utilization
- Make architectural decisions that balance immediate needs with long-term scalability
- Mentor junior engineers and promote best practices
- Contribute to the evolution of our technical standards and processes


- 5+ years of experience
- Flask API / REST API development experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of Data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies

AccioJob is conducting a Walk-In Hiring Drive with Global Consulting and Services for the position of Python Automation Engineer.
To apply, register and select your slot here: https://go.acciojob.com/b7BZZZ
Required Skills: Excel, Python, Pandas, NumPy, SQL
Eligibility:
- Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
- Branch: All
- Graduation Year: 2023, 2024, 2025
Work Details:
- Work Location: Pune (Onsite)
- CTC: 3 LPA to 6 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Profile & Background Screening Round,
Technical Interview 1
Technical Interview 2
Tech+Managerial Round
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/b7BZZZ
Or, apply through our newly launched app: https://go.acciojob.com/4wvBDe

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field


Job description
Opportunity to work on cutting-edge tech pieces & build from scratch
Ensure seamless performance while handling large volumes of data without system slowdowns
Collaborate with cross-functional teams to meet business goals
Required Skills:
Frontend: ReactJS (Next.js must)
Backend: Experience in Node.js, Python, or Java
Databases: SQL (must), MongoDB (nice to have)
Caching & Messaging: Experience with Redis, Kafka, or Cassandra
Cloud certification is a bonus



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of time), performing peer code reviews, handling pull requests and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD).
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached.
- Understanding of OAuth 2.0, JWT, and SSO authentication mechanisms, and adherence to API security best practices following OWASP guidelines, is beneficial.
- Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog will be considered an advantage.
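As a small illustration of the design patterns named in the skills above, here is a minimal Observer sketch in Python. The order-event names and subscriber actions are invented for the example; the point is the decoupling of publishers from subscribers:

```python
class Observable:
    """Minimal Observer pattern: every subscriber is notified of each event."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for callback in self._subscribers:
            callback(event)


seen = []
orders = Observable()
# Two independent reactions to the same event, neither known to the publisher:
orders.subscribe(lambda e: seen.append(("email", e)))
orders.subscribe(lambda e: seen.append(("audit", e)))
orders.publish("order#42 created")
print(seen)
```

New reactions (metrics, webhooks) can be added by subscribing, without touching the publishing code.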
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- 2+ years relevant experience, required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front end build tools.

Hybrid work mode
(Azure) EDW: Experience loading Star-schema data warehouses using framework architectures, including loading Type 2 dimensions. Experience ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen 2 Storage, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).

Job Title: Site Reliability Engineer (SRE)
Experience: 4+ Years
Work Location: Bangalore / Chennai / Pune / Gurgaon
Work Mode: Hybrid or Onsite (based on project need)
Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.
🛠️ Key Responsibilities
- Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
- Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting.
- Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
- Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
- Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
- Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
- Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
- Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.
✅ Must-Have Skills
- 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
- Strong programming skills in Python (mandatory).
- Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
- Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
- Strong background in Linux-based systems and shell scripting.
- Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
- Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
- Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.
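The SLO and Error Budget responsibilities above reduce to simple arithmetic; as a sketch (the 99.9% target and downtime figures are example values, not a real service's numbers):

```python
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Allowed downtime per window for a given availability SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)


def budget_remaining(slo: float, downtime_minutes: float, days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means the budget is blown)."""
    budget = error_budget_minutes(slo, days)
    return (budget - downtime_minutes) / budget


print(round(error_budget_minutes(0.999), 1))    # 43.2 minutes of allowed downtime per 30 days
print(round(budget_remaining(0.999, 10.8), 2))  # 0.75: a quarter of the budget is spent
```

When `budget_remaining` approaches zero, SRE practice is to freeze risky deployments until reliability recovers.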


AccioJob is conducting a Walk-In Hiring Drive with Atomic Loops for the position of AI/ML Developer Intern.
To apply, register, and select your slot here: https://go.acciojob.com/E8wPb8
Required Skills: Python, AI, Prompting, ML understanding
Eligibility: ALL
Degree: ALL
Branch: ALL
Graduation Year: 2019, 2020, 2021, 2022, 2023, 2024, 2025, 2026
Work Details:
- Work Location: Pune (Onsite)
- CTC: 4 LPA to 5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Profile & Background Screening Round, Company Side Process
Company Side Process
2 rounds will be for the intern role, and 3 rounds will be for the full-time role (Virtual or Face-to-Face)
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/E8wPb8

AccioJob is conducting a Walk-In Hiring Drive with Atomic Loops for the position of Python Developer Intern.
To apply, register and select your slot here: https://go.acciojob.com/Bg2vSq
Required Skills: Python, Django, FastAPI
Eligibility: ALL
Degree: ALL
Branch: ALL
Graduation Year: 2025, 2026
Work Details:
- Work Location: Pune (Onsite)
- CTC: 3.6 LPA to 4.2 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Profile & Background Screening Round, Interview Round 1, Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/Bg2vSq


About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.


Job title – Python Developer
Experience – 4 to 6 years
Location – Pune/Mumbai/Bengaluru
Please find the JD below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.

Job Title: Developer
Work Location: Pune, MH
Skills Required: Azure Data Factory
Experience Range in Required Skills: 6-8 Years
Job Description: Azure, ADF, Databricks, Python
Essential Skills: Azure, ADF, Databricks, Python
Desirable Skills: Azure, ADF, Databricks, Python

Job Summary:
We are seeking a skilled Python Developer with a strong foundation in Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying intelligent systems that leverage large datasets and cutting-edge ML algorithms to solve real-world problems.
Key Responsibilities:
- Design and implement machine learning models using Python and libraries like TensorFlow, PyTorch, or Scikit-learn.
- Perform data preprocessing, feature engineering, and exploratory data analysis.
- Develop APIs and integrate ML models into production systems using frameworks like Flask or FastAPI.
- Collaborate with data scientists, DevOps engineers, and backend teams to deliver scalable AI solutions.
- Optimize model performance and ensure robustness in real-time environments.
- Maintain clear documentation of code, models, and processes.
Required Skills:
- Proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Strong understanding of ML algorithms (classification, regression, clustering, deep learning).
- Experience with data pipeline tools (e.g., Airflow, Spark) and cloud platforms (AWS, Azure, or GCP).
- Familiarity with containerization (Docker, Kubernetes) and CI/CD practices.
- Solid grasp of RESTful API development and integration.
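For the classification metrics implied by the skills above, a minimal pure-Python sketch of precision, recall, and F1 follows. In practice you would use library implementations such as `sklearn.metrics`; the labels here are made-up example data:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn


def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and their harmonic mean (F1), guarding against zero division."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```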
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or related field.
- 2–5 years of experience in Python development with a focus on AI/ML.
- Exposure to MLOps practices and model monitoring tools.

Job Purpose
Responsible for managing end-to-end database operations, ensuring data accuracy, integrity, and security across systems. The position plays a key role in driving data reliability, availability, and compliance with operational standards.
Key Responsibilities:
- Collate audit reports from the QA team and structure data in accordance with Standard Operating Procedures (SOP).
- Perform data transformation and validation for accuracy and consistency.
- Upload processed datasets into SQL Server using SSIS packages.
- Monitor and optimize database performance, identifying and resolving bottlenecks.
- Perform regular backups, restorations, and recovery checks to ensure data continuity.
- Manage user access and implement robust database security policies.
- Oversee database storage allocation and utilization.
- Conduct routine maintenance and support incident management, including root cause analysis and resolution.
- Design and implement scalable database solutions and architecture.
- Create and maintain stored procedures, views, and other database components.
- Optimize SQL queries for performance and scalability.
- Execute ETL processes and support seamless integration of multiple data sources.
- Maintain data integrity and quality through validation and cleansing routines.
- Collaborate with cross-functional teams on data solutions and project deliverables.
Educational Qualification: Any Graduate
Required Skills & Qualifications:
- Proven experience with SQL Server or similar relational database platforms.
- Strong expertise in SSIS, ETL processes, and data warehousing.
- Proficiency in SQL/T-SQL, including scripting, performance tuning, and query optimization.
- Experience in database security, user role management, and access control.
- Familiarity with backup/recovery strategies and database maintenance best practices.
- Strong analytical skills with experience working with large and complex datasets.
- Solid understanding of data modeling, normalization, and schema design.
- Knowledge of incident and change management processes.
- Excellent communication and collaboration skills.
- Experience with Python for data manipulation and automation is a strong plus.
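The data validation and cleansing duties above typically start with row-level checks before anything is loaded into SQL Server. A minimal sketch, where the field names, date format, and score range are invented for the example rather than taken from any real SOP:

```python
from datetime import datetime

REQUIRED = ("audit_id", "auditor", "audit_date", "score")


def validate_row(row: dict):
    """Return a list of validation errors for one audit record (empty list = clean)."""
    errors = [f"missing field: {f}" for f in REQUIRED if not row.get(f)]
    if row.get("audit_date"):
        try:
            datetime.strptime(row["audit_date"], "%Y-%m-%d")
        except ValueError:
            errors.append("audit_date not in YYYY-MM-DD format")
    if row.get("score") is not None and not 0 <= row["score"] <= 100:
        errors.append("score out of range 0-100")
    return errors


clean = {"audit_id": 1, "auditor": "QA1", "audit_date": "2024-05-01", "score": 92}
dirty = {"audit_id": 2, "auditor": "", "audit_date": "01/05/2024", "score": 120}
print(validate_row(clean))  # no errors
print(validate_row(dirty))  # three problems flagged before upload
```

Rows that fail validation would be routed to a rejects table rather than the SSIS load.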


POSITION / TITLE: Data Science Lead
Location: Offshore – Hyderabad/Bangalore/Pune
Who are we looking for?
Individuals with 8+ years of experience implementing and managing data science projects. Excellent working knowledge of traditional machine learning and LLM techniques.
The candidate must demonstrate the ability to navigate and advise on complex ML ecosystems from a model building and evaluation perspective. Experience in NLP and chatbots domains is preferred.
We acknowledge the job market is blurring the line between data roles: while software skills are necessary, the emphasis of this position is on data science skills, not on data engineering, ML engineering, or software engineering.
Responsibilities:
· Lead data science and machine learning projects, contributing to model development, optimization and evaluation.
· Perform data cleaning, feature engineering, and exploratory data analysis.
· Translate business requirements into technical solutions, document and communicate project progress, manage non-technical stakeholders.
· Collaborate with other DS and engineers to deliver projects.
Technical Skills – Must have:
· Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
· Proficiency with Python for data analysis, supervised & unsupervised learning ML tasks.
· Ability to translate complex machine learning problem statements into specific deliverables and requirements.
· Should have worked with major cloud platforms such as AWS, Azure or GCP.
· Working knowledge of SQL and NoSQL databases.
· Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
· Keep abreast of new tools, algorithms, and techniques in machine learning and work to implement them in the organization.
· Strong understanding of evaluation and monitoring metrics for machine learning projects.
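As one concrete illustration of the evaluation-metrics bullet above, precision, recall, and F1 for a binary classifier can be computed from scratch (a minimal sketch with invented labels):

```python
# Compare predicted binary labels against ground truth.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, round(f1, 3))  # 0.75 0.75 0.75
```

In practice these come from `sklearn.metrics`, but being able to derive them by hand is exactly the understanding the bullet asks for.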
Technical Skills – Good to have:
· Track record of getting ML models into production.
· Experience building chatbots.
· Experience with closed and open source LLMs.
· Experience with frameworks and technologies like scikit-learn, BERT, LangChain, AutoGen…
· Certifications or courses in data science.
Education:
· Bachelor’s/Master’s/PhD degree in Computer Science, Engineering, Data Science, or a related field.
Process Skills:
· Understanding of Agile and Scrum methodologies.
· Ability to follow SDLC processes and contribute to technical documentation.
Behavioral Skills:
· Self-motivated and capable of working independently with minimal management supervision.
· Well-developed design, analytical, and problem-solving skills.
· Excellent communication and interpersonal skills.
· Excellent team player, able to work with virtual teams in several time zones.


- Strong AI/ML OR Software Developer Profile
- Mandatory (Experience 1) - Must have 3+ YOE in Core Software Development (SDLC)
- Mandatory (Experience 2) - Must have 2+ years of experience in AI/ML, preferably in the conversational AI domain (speech-to-text, text-to-speech, speech emotion recognition) or agentic AI systems.
- Mandatory (Experience 3) - Must have hands-on experience in fine-tuning LLMs/SLMs, model optimization (quantization, distillation), and RAG.
- Mandatory (Experience 4) - Hands-on programming experience in Python, TensorFlow, PyTorch, and model APIs (Hugging Face, LangChain, OpenAI, etc.)
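The RAG requirement above centres on retrieval followed by generation. The retrieval step can be sketched in pure Python with bag-of-words cosine similarity — a toy stand-in for the dense embeddings and vector store a real pipeline would use; the documents here are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

docs = [
    "quantization shrinks model weights to low precision",
    "distillation trains a small student from a large teacher",
    "speech to text converts audio into transcripts",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# In a full RAG pipeline the retrieved text would be injected into the
# LLM prompt as grounding context before generation.
print(retrieve("how does model quantization work"))
# ['quantization shrinks model weights to low precision']
```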


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.

Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling Integrator and B2B integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.