50+ Python Jobs in Pune | Python Job openings in Pune


Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling Integrator and integration development, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command of Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.
Minimum requirements
5+ years of industry software engineering experience (excluding internships and co-ops)
Strong coding skills in any programming language (we understand new languages can be learned on the job so our interview process is language agnostic)
Strong collaboration skills; able to work across workstreams within your team and contribute to your peers’ success
Ability to thrive with a high level of autonomy and responsibility, with an entrepreneurial mindset
Interest in working as a generalist across varying technologies and stacks to solve problems and delight both internal and external users
Preferred Qualifications
Experience with large-scale financial tracking systems
Good understanding and practical knowledge of cloud-based services and related technologies (e.g. gRPC, GraphQL, Docker/Kubernetes, cloud providers such as AWS)



Job Title: Python Full Stack Developer
Location: Pune (Work from Office)
Experience: Minimum 5 Years
Job Summary:
We are looking for a highly skilled Python Full Stack Developer with a minimum of 5 years of hands-on experience. The ideal candidate will be proficient in building scalable web applications and APIs using Python technologies, and comfortable working with front-end frameworks like Angular or React. Experience with DevOps tools and practices is a plus.
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications using Python.
- Develop RESTful APIs using Flask (see the sketch after this list).
- Build responsive front-end interfaces using Angular or React.
- Integrate front-end components with server-side logic.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Implement DevOps practices for CI/CD and deployment automation.
- Participate in code reviews, testing, and bug fixing.
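As a rough illustration of the Flask API work described above, here is a minimal sketch; the /tasks resource, its fields, and the in-memory store are assumptions for illustration, not details from this posting:

```python
# Minimal Flask REST API sketch; the /tasks resource and its fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
tasks = []  # in-memory store, for illustration only

@app.route("/tasks", methods=["GET"])
def list_tasks():
    return jsonify(tasks)

@app.route("/tasks", methods=["POST"])
def create_task():
    payload = request.get_json(force=True)
    task = {"id": len(tasks) + 1, "title": payload.get("title", "")}
    tasks.append(task)
    return jsonify(task), 201

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```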
Required Skills:
- Python (5+ years)
- Flask or similar frameworks
- REST APIs development
- Angular or React (Front-end development)
- DevOps tools and practices (CI/CD, Docker, etc.)
- Strong understanding of software development best practices and design patterns
Eligibility Criteria:
- Minimum 5 years of overall experience in software development
- At least 5 years of strong hands-on experience in Python full stack development
Work Mode:
- Work from Office – Pune location

AccioJob is conducting an offline hiring drive in partnership with Our Partner Company to hire Junior Business/Data Analysts for an internship with a Pre-Placement Offer (PPO) opportunity.
Apply, Register and select your Slot here: https://go.acciojob.com/69d3Wd
Job Description:
- Role: Junior Business/Data Analyst (Internship + PPO)
- Work Location: Hyderabad
- Internship Stipend: 15,000 - 25,000/month
- Internship Duration: 3 months
- CTC on PPO: 5 LPA - 6 LPA
Eligibility Criteria:
- Degree: Open to all academic backgrounds
- Graduation Year: 2023, 2024, 2025
Required Skills:
- Proficiency in SQL, Excel, Power BI, and basic Python
- Strong analytical mindset and interest in solving business problems with data
Hiring Process:
- Offline Assessment at AccioJob Skill Centres (Hyderabad, Pune, Noida)
- 1 Assignment + 2 Technical Interviews (Virtual; In-person for Hyderabad candidates)
Note: Please bring your laptop and earphones for the test.
Register Here: https://go.acciojob.com/69d3Wd
AccioJob is organizing an exclusive offline hiring drive in collaboration with GameBerry Labs for the role of Software Development Engineer 1 (SDE 1).
To Apply, Register and select your Slot here: https://go.acciojob.com/Zq2UnA
Job Description:
- Role: SDE 1
- Work Location: Bangalore
- CTC: 10 LPA - 15 LPA
Eligibility Criteria:
- Education: B.Tech, BE, BCA, MCA, M.Tech
- Branches: Circuit Branches (CSE, ECE, IT, etc.)
- Graduation Year:
- 2024 (Minimum 9 months of experience)
- 2025 (Minimum 3-6 months of experience)
Evaluation Process:
- Offline Assessment at AccioJob Skill Centres (Hyderabad, Bangalore, Pune, Noida)
- Technical Interviews (2 Rounds - Virtual for most; In-person for Bangalore candidates)
Note: Carry your laptop and earphones for the assessment.
Register Here: https://go.acciojob.com/Zq2UnA

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue (see the sketch after this list)
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
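For context, a minimal AWS Glue PySpark job of the kind described above might look like the sketch below; the catalog database, table, and S3 path are placeholders, not details from this posting:

```python
# Minimal AWS Glue PySpark ETL sketch; database, table, and bucket names are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, filter, and write curated Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw_db", table_name="orders")
df = dyf.toDF().filter("order_status = 'COMPLETED'")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```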
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills

Key Technical Skillsets-
- Design, develop, and maintain scalable applications using AWS services, Python, and Boto3 (see the sketch after this list).
- Collaborate with cross-functional teams to define, design, and ship new features.
- Implement best practices for cloud architecture and application development.
- Optimize applications for maximum speed and scalability.
- Troubleshoot and resolve issues in development, test, and production environments.
- Write clean, maintainable, and efficient code.
- Participate in code reviews and contribute to team knowledge sharing.
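As a hedged illustration of Python + Boto3 work of this kind, the sketch below uploads a file to S3 and notifies an SQS queue; the bucket name, queue URL, and helper function are assumptions:

```python
# Minimal Boto3 sketch; bucket and queue identifiers are placeholders, not from this posting.
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def upload_and_notify(path: str, bucket: str, key: str, queue_url: str) -> None:
    """Upload a local file to S3, then publish a notification message to SQS."""
    s3.upload_file(path, bucket, key)
    sqs.send_message(QueueUrl=queue_url, MessageBody=f"s3://{bucket}/{key} uploaded")

if __name__ == "__main__":
    upload_and_notify(
        "report.csv",
        "example-bucket",
        "reports/report.csv",
        "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",
    )
```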




Job Overview:
We are looking for a skilled professional with:
- 7+ years of overall experience, including a minimum of 5 years in Computer Vision, Machine Learning, Deep Learning, and algorithm development.
- Proficiency in Data Science and Data Analysis techniques.
- Hands-on programming experience with Python, R, MATLAB or Octave.
- Experience with AI frameworks like TensorFlow, PySpark, Theano, and libraries such as PyTorch, Pandas, NumPy, etc.
- Strong understanding of algorithms like Regression, SVM, Decision Trees, KNN, and Neural Networks (a small comparison sketch follows this list).
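To make the algorithm expectations above concrete, here is a minimal scikit-learn sketch comparing a few of the named classifiers on a toy dataset; the dataset and model choices are illustrative only:

```python
# Minimal scikit-learn sketch: cross-validate several of the classifiers named above on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "decision_tree": DecisionTreeClassifier(),
    "knn": KNeighborsClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```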
Key Skills & Attributes:
- Fast learner with strong problem-solving abilities
- Innovative thinking and approach
- Excellent communication skills
- High standards of integrity, accountability, and transparency
- Exposure to or experience with international work environments
Notice Period: Immediate to 30 days

Python Automation Engineer -
JD :
- Engage with development teams to improve the quality of the application.
- Provide in-depth technical mentoring across the test automation team.
- Provide highly innovative solutions to automatically qualify the application.
- Routinely exercise independent judgment in test automation methods, techniques and criteria for achieving objectives.
Experience/Exposure:
- Mid-level programming skills in Python
- Experience with UI-driven test automation frameworks such as Selenium or Playwright (see the sketch after this list)
- Experience with CI/CD tools
- Ability to troubleshoot complex software/hardware configuration problems
- Strong analytical, problem-solving, documentation, and communication skills
- Passion for product quality and eagerness to learn new technologies
- Ability to function effectively in a fast-paced environment and manage continuously changing business needs; excellent time management skills required
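As a hedged example of the UI-driven automation mentioned above, the sketch below uses Selenium with pytest; the URL and assertions are placeholders, and a local Chrome/driver setup is assumed:

```python
# Minimal Selenium + pytest UI test sketch; the target URL and assertions are illustrative.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def browser():
    driver = webdriver.Chrome()  # assumes Chrome and a compatible driver are available
    yield driver
    driver.quit()

def test_homepage_renders(browser):
    browser.get("https://example.com")
    assert "Example" in browser.title
    heading = browser.find_element(By.TAG_NAME, "h1")
    assert heading.text.strip() != ""
```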


- Design and implement cloud solutions, build MLOps on Azure
- Build CI/CD pipeline orchestration using GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools (see the sketch after this list)
- Review data science models; handle code refactoring and optimization, containerization, deployment, versioning, and quality monitoring
- Test and validate data science models and automate those tests
- Deploy code and pipelines across environments
- Track model performance metrics
- Track service performance metrics
- Communicate with a team of data scientists, data engineers, and architects; document the processes
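For orchestration of the kind listed above, a minimal Airflow DAG might look like the sketch below; the DAG id, schedule, and task bodies are assumptions, not this employer's pipeline:

```python
# Minimal Airflow 2.x DAG sketch chaining a retrain step and a validation step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    print("retraining model...")  # placeholder for the real training logic

def validate_model():
    print("validating model...")  # placeholder for metric checks / tests

with DAG(
    dag_id="ml_retrain_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    retrain = PythonOperator(task_id="retrain", python_callable=retrain_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    retrain >> validate
```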


Requirements
- 7+ years of experience with Python
- Strong expertise in Python frameworks (Django, Flask, or FastAPI); a minimal FastAPI sketch follows this list
- Experience with GCP, Terraform, and Kubernetes
- Deep understanding of REST API development and GraphQL
- Strong knowledge of SQL and NoSQL databases
- Experience with microservices architecture
- Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
- Experience with container orchestration using Kubernetes
- Understanding of cloud architecture and serverless computing
- Experience with monitoring and logging solutions
- Strong background in writing unit and integration tests
- Familiarity with AI/ML concepts and integration points
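As a hedged illustration of the framework and REST API expertise listed above, here is a minimal FastAPI sketch; the /items resource and its schema are invented for illustration:

```python
# Minimal FastAPI sketch; run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

items = []  # in-memory store, for illustration only

@app.get("/items")
def list_items():
    return items

@app.post("/items", status_code=201)
def create_item(item: Item):
    items.append(item)
    return item
```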
Responsibilities
- Design and develop scalable backend services for our AI platform
- Architect and implement complex systems with high reliability
- Build and maintain APIs for internal and external consumption
- Work closely with AI engineers to integrate ML functionality
- Optimize application performance and resource utilization
- Make architectural decisions that balance immediate needs with long-term scalability
- Mentor junior engineers and promote best practices
- Contribute to the evolution of our technical standards and processes


Who are we a.k.a “About Cambridge Wealth” :
We are an early-stage Fintech Startup - working on exciting Fintech Products for some of the Top 5 Global Banks and building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Bakerstreet Fintech might be the place for you. We have a flat, ownership-oriented culture and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won the Webby awards for Digital Strategy. In short, a bleeding-edge team.
What are we looking for a.k.a “The JD” :
We are looking for a motivated and energetic Flutter Intern who will design and build product application features across various cross-platform devices. Just like Lego boxes that fit on top of one another, we are looking for someone who has experience using Flutter widgets that can be plugged together, customised, and deployed anywhere.
What will you be doing at CW a.k.a “Your Responsibilities”:
- Create multi-platform apps for iOS / Android using Flutter Development Framework.
- Participate in the analysis, design, implementation, and testing of new apps.
- Apply industry standards during the development process to ensure high quality.
- Translate designs and wireframes into high quality code.
- Ensure the best possible performance, quality, and responsiveness of the application.
- Help maintain code quality, organisation, and automation.
- Work on bug fixing and improving application performance
What should our ideal candidate have a.k.a “Your Requirements”:
- Knowledge of mobile app development.
- Worked at any stage startup or have developed projects of their own ideas.
- Good knowledge of Flutter and interest in developing mobile applications.
- Available for full time (in-office) internship.
Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:
- You are ready to be a part of a Zero To One Journey which implies that you shall be involved in building fintech products and processes from the ground up.
- You are comfortable working in an unstructured environment with a small team, where you decide what your day looks like and take the initiative to pick up the right piece of work, own it, and work with the founding team on it.
- This is not an environment where someone will be checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to, else we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements
- You want complete ownership for your role & be able to drive it the way you think is right. You are looking to stick around for the long term and grow with the company.
- You have the ability to be a self-starter and take ownership of deliverables to develop a consensus with the team on approach and methods and deliver to them.
Speed-track your application process by completing the 40 min test at the link below:
https://app.testgorilla.com/s/itrlc3m2
On successfully clearing the above, there is a 20-30 min video interview, followed by a technical interview and a meeting with the founder at the office. You may be requested to complete a brief in-person exercise at that point.
Please note that this is an On-site/ Work from Office opportunity at our headquarters at Prabhat Road, Pune
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind.
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.

Required Skills:
- Hands-on experience with Databricks, PySpark
- Proficiency in SQL, Python, and Spark.
- Understanding of data warehousing concepts and data modeling.
- Experience with CI/CD pipelines and version control (e.g., Git).
- Fundamental knowledge of any cloud services, preferably Azure or GCP.
Good to Have:
- BigQuery
- Experience with performance tuning and data governance.

Required Qualifications:
- 5+ years of professional software development experience.
- Post-secondary degree in computer science, software engineering or related discipline, or equivalent working experience.
- Development of distributed applications with Microsoft technologies: C# .NET/Core, SQL Server, Entity Framework.
- Deep expertise with microservices architectures and design patterns.
- Cloud Native AWS experience with services such as Lambda, SQS, RDS/Aurora, S3, Lex, and Polly.
- Mastery of both Windows and Linux environments and their use in the development and management of complex distributed systems architectures.
- Git source code repository and continuous integration tools.

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Infrastructure Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/kcYTAp
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Linux, Networking, One scripting language among Python, Bash, and PowerShell, OOPs, Cloud Platforms (AWS, Azure)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE Core With Cloud Certification
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centre in Noida, Pune, Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/kcYTAp

AccioJob is conducting a Walk-In Hiring Drive with a reputed global IT consulting company at AccioJob Skill Centres for the position of Data Engineer, specifically for female candidates.
To Apply, Register and select your Slot here: https://go.acciojob.com/8p9ZXN
We will not consider your application if you do not register and select a slot via the above link.
Required Skills: Python, Databases (MySQL), Big Data (Spark, Kafka)
Eligibility:
- Degree: B.Tech/BE
- Branch: CSE – AI & DS / AI & ML
- Graduation Year: 2024 & 2025
Note: Only Female Candidates can apply for this job opportunity
Work Details:
- Work Mode: Work From Office
- Work Location: Bangalore & Coimbatore
- CTC: 11.1 LPA
Evaluation Process:
- Round 1: Offline Assessment at AccioJob Skill Centre in Noida, Pune, Hyderabad.
- Further Rounds (for Shortlisted Candidates only)
- HackerRank Online Assessment
- Coding Pairing Interview
- Technical Interview
- Cultural Alignment Interview
Important Note: Please bring your laptop and earphones for the test.
Register here: https://go.acciojob.com/8p9ZXN

Test Automation Engineer Job Description
A Test Automation Engineer is responsible for designing, developing, and implementing automated testing solutions to ensure the quality and reliability of software applications. Here's a breakdown of the job:
Key Responsibilities
- Test Automation Framework: Design and develop test automation frameworks using tools like Selenium, Appium, or Cucumber.
- Automated Test Scripts: Create and maintain automated test scripts to validate software functionality, performance, and security.
- Test Data Management: Develop and manage test data, including data generation, masking, and provisioning.
- Test Environment: Set up and maintain test environments, including configuration and troubleshooting.
- Collaboration: Work with cross-functional teams, including development, QA, and DevOps to ensure seamless integration of automated testing.
Essential Skills
- Programming Languages: Proficiency in programming languages like Java, Python, or C#.
- Test Automation Tools: Experience with test automation tools like Selenium, Appium, or Cucumber.
- Testing Frameworks: Knowledge of testing frameworks like TestNG, JUnit, or PyUnit.
- Agile Methodologies: Familiarity with Agile development methodologies and CI/CD pipelines.

Position: AWS Data Engineer
Experience: 5 to 7 Years
Location: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Work Mode: Hybrid (3 days work from office per week)
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and motivated AWS Data Engineer with 5–7 years of experience in building and optimizing data pipelines, architectures, and data sets. The ideal candidate will have strong experience with AWS services including Glue, Athena, Redshift, Lambda, DMS, RDS, and CloudFormation. You will be responsible for managing the full data lifecycle from ingestion to transformation and storage, ensuring efficiency and performance.
Key Responsibilities:
- Design, develop, and optimize scalable ETL pipelines using AWS Glue, Python/PySpark, and SQL.
- Work extensively with AWS services such as Glue, Athena, Lambda, DMS, RDS, Redshift, CloudFormation, and other serverless technologies.
- Implement and manage data lake and warehouse solutions using AWS Redshift and S3.
- Optimize data models and storage for cost-efficiency and performance.
- Write advanced SQL queries to support complex data analysis and reporting requirements.
- Collaborate with stakeholders to understand data requirements and translate them into scalable solutions.
- Ensure high data quality and integrity across platforms and processes.
- Implement CI/CD pipelines and best practices for infrastructure as code using CloudFormation or similar tools.
Required Skills & Experience:
- Strong hands-on experience with Python or PySpark for data processing.
- Deep knowledge of AWS Glue, Athena, Lambda, Redshift, RDS, DMS, and CloudFormation.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Familiarity with serverless architectures and AWS best practices.
- Experience in designing and maintaining robust data architectures and data lakes.
- Ability to troubleshoot and resolve data pipeline issues efficiently.
- Strong communication and stakeholder management skills.

What You’ll Do:
As a Data Scientist, you will work closely across DeepIntent Analytics teams located in New York City, India, and Bosnia. The role will support internal and external business partners in defining patient and provider audiences, and generating analyses and insights related to measurement of campaign outcomes, Rx, patient journey, and supporting evolution of DeepIntent product suite. Activities in this position include creating and scoring audiences, reading campaign results, analyzing medical claims, clinical, demographic and clickstream data, performing analysis and creating actionable insights, summarizing, and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better audiences
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics
- Design and deploy new iterations of production-level code
- Contribute posts to our upcoming technical blog
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, OR, or Data Science. Graduate degree is strongly preferred
- 3+ years of working experience as Data Analyst, Data Engineer, Data Scientist in digital marketing, consumer advertisement, telecom, or other areas requiring customer level predictive analytics
- Background in either data engineering or analytics
- Hands-on technical experience is required, including proficiency in performing statistical analysis in Python with the relevant libraries
- You have an advanced understanding of the ad-tech ecosystem, digital marketing, and advertising data and campaigns, or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications)
- Experience in programmatic/DSP-related marketing predictive analytics, audience segmentation, or audience behaviour analysis, or medical/healthcare experience
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference)
- Familiarity with data science tools such as XGBoost, PyTorch, and Jupyter, and strong LLM user experience (developer/API experience is a plus)
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing




About Data Axle
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.
Data Axle India is recognized as a Great Place to Work! This prestigious designation is a testament to our collective efforts in fostering an exceptional workplace culture and creating an environment where every team member can thrive.
General Summary
We are looking for a Full stack developer who will be responsible for:
Roles & Responsibilities
- Implement application components and systems according to department standards and guidelines.
- Work with product and designers to translate requirements into accurate representations for the web.
- Analyze, design, code, debug, and test business applications.
- Code reviews in accordance with team processes/standards.
- Perform other miscellaneous duties as assigned by Management.
Qualifications
- 3+ years of Software Engineering experience required.
- Bachelor’s degree in computer science, Engineering, or a related field.
- Experience in developing web applications using Django (a minimal sketch follows this list). Strong knowledge of ReactJS and related libraries such as Redux. Proficient in HTML, CSS, and JavaScript.
- Experience in working with SQL databases such as MySQL.
- Strong problem-solving skills and attention to detail.
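To give a rough sense of the Django work mentioned above, here is a minimal view-plus-route sketch; the app, model, and field names are hypothetical:

```python
# Minimal Django sketch: a JSON list view and its URL route (model/fields are hypothetical).
# views.py
from django.http import JsonResponse
from .models import Product  # assumed model with name and price fields

def product_list(request):
    data = list(Product.objects.values("id", "name", "price"))
    return JsonResponse({"products": data})

# urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("products/", views.product_list, name="product-list"),
]
```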
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Master’s degree in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and Develop AI-Powered Solutions: Architect and implement scalable AI/ML systems, focusing on Large Language Models (LLMs) and other deep learning applications.
- End-to-End Model Development: Lead the entire lifecycle of AI models—from data collection and preprocessing to training, fine-tuning, evaluation, and deployment.
- Fine-Tuning & Customization: Leverage techniques like LoRA (Low-Rank Adaptation) and Q-LoRA to efficiently fine-tune large models for specific business applications (see the sketch after this list).
- Reasoning Model Implementation: Work with advanced reasoning models such as DeepSeek-R1, exploring their applications in enterprise AI workflows.
- Data Engineering & Dataset Creation: Design and curate high-quality datasets optimized for fine-tuning AI models, ensuring robust training and validation processes.
- Performance Optimization & Efficiency: Optimize model inference, computational efficiency, and resource utilization for large-scale AI applications.
- MLOps & CI/CD Pipelines: Implement best practices for MLOps, ensuring automated training, deployment, monitoring, and continuous improvement of AI models.
- Cloud & Edge AI Deployment: Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP) and explore edge AI deployment where applicable.
- API Development & Microservices: Develop RESTful APIs and microservices to integrate AI models seamlessly into enterprise applications.
- Security, Compliance & Ethical AI: Ensure AI solutions comply with industry standards, data privacy laws (e.g., GDPR, HIPAA), and ethical AI guidelines.
- Collaboration & Stakeholder Engagement: Work closely with product managers, data engineers, and business teams to translate business needs into AI-driven solutions.
- Mentorship & Technical Leadership: Guide and mentor junior engineers, fostering best practices in AI/ML development, model fine-tuning, and software engineering.
- Research & Innovation: Stay updated with emerging AI trends, conduct experiments with cutting-edge architectures and fine-tuning techniques, and drive innovation within the team.
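As a hedged sketch of the LoRA-style fine-tuning mentioned above, the snippet below wraps a small open model with adapters via Hugging Face PEFT; the base model, target modules, and hyperparameters are assumptions, not this team's setup:

```python
# Minimal LoRA setup sketch with Hugging Face Transformers + PEFT (hyperparameters are illustrative).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-350m"  # small open model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```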
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 5-8 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiar with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and implement software solutions that power machine learning models, particularly in LLMs
- Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects
- Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges
- Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility
- Design and implement scalable infrastructure for developing and deploying GenAI solutions
- Support model deployment and API integration to ensure interaction with existing enterprise systems.
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 3-5 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiar with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are looking for a Senior Software Development Engineer with 5-8 years of experience specializing in infrastructure deployment automation and VMware workload migration. The ideal candidate will have expertise in Infrastructure-as-Code (IaC), VMware vSphere, vMotion, HCX, Terraform, Kubernetes, and AI POD managed services. You will be responsible for automating infrastructure provisioning, migrating workloads from VMware environments to cloud and hybrid infrastructures, and optimizing AI/ML deployments.
Key Roles & Responsibilities
- Automate infrastructure deployment using Terraform, Ansible, and Helm for VMware and cloud environments.
- Develop and implement VMware workload migration strategies, including vMotion, HCX, SRM (Site Recovery Manager), and lift-and-shift migrations.
- Migrate VMware-based workloads to public cloud (AWS, Azure, GCP) or hybrid cloud environments.
- Optimize and manage AI POD workloads on VMware and Kubernetes-based environments.
- Leverage VMware HCX for live and bulk workload migrations, ensuring minimal downtime and optimal performance.
- Automate virtual machine provisioning and lifecycle management using VMware vSphere APIs, PowerCLI, or vRealize Automation.
- Integrate VMware workloads with Kubernetes for containerized AI/ML workflows.
- Ensure workload high availability and disaster recovery post-migration using VMware SRM, vSAN, and backup strategies.
- Monitor and troubleshoot migration performance using vRealize Operations, Prometheus, Grafana, and ELK.
- Develop and optimize CI/CD pipelines to automate workload migration, deployment, and validation.
- Ensure security and compliance for workloads before, during, and after migration.
- Collaborate with cloud architects to design hybrid cloud solutions supporting AI/ML workloads.
Basic Qualifications
- 5–8 years of experience in infrastructure automation, VMware workload migration, and cloud integration.
- Expertise in VMware vSphere, ESXi, vMotion, HCX, SRM, vSAN, and NSX-T.
- Hands-on experience with workload migration tools such as VMware HCX, CloudEndure, AWS Application Migration Service, and Azure Migrate.
- Proficiency in Infrastructure-as-Code using Terraform, Ansible, PowerCLI, and vRealize Automation.
- Strong experience with Kubernetes (EKS, AKS, GKE) and containerized AI/ML workloads.
- Experience in public cloud migration (AWS, Azure, GCP) for VMware-based workloads.
- Hands-on knowledge of CI/CD tools such as Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Strong scripting and automation skills in Python, Bash, or PowerShell.
- Familiarity with disaster recovery, backup, and business continuity planning in VMware environments.
- Experience in performance tuning and troubleshooting for VMware-based workloads.
Preferred Qualifications
- Experience with NVIDIA GPU orchestration (e.g., KubeFlow, Triton, RAPIDS).
- Familiarity with Packer for automated VM image creation.
- Exposure to Edge AI deployments, federated learning, and AI inferencing at scale.
- Contributions to open-source infrastructure automation projects.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced Staff Engineer with expertise in SONiC (Software for Open Networking in the Cloud), Networking, Security, and Linux. The ideal candidate will have a deep understanding of data plane and control plane networking, security mechanisms, and open-source networking stacks. You will play a crucial role in designing, developing, and optimizing high-performance networking solutions based on SONiC, working on switch OS internals, and ensuring security at all levels.
Key Roles & Responsibilities
- Design, develop, and optimize SONiC-based networking solutions for data center and cloud environments.
- Contribute to SONiC’s Control Plane, Data Plane, SAI (Switch Abstraction Interface), and integration with ASICs.
- Develop and enhance network security mechanisms, including ACLs, firewall rules, and secure communication protocols.
- Work with Linux kernel networking stack, DPDK, eBPF, and other high-performance packet processing frameworks.
- Integrate and optimize FRR (Free Range Routing), BGP, OSPF, and other routing protocols within SONiC.
- Collaborate with ASIC vendors to integrate new chipsets with SONiC through SAI API development.
- Drive software development using C, C++, Python, and Go for various networking and security features.
- Optimize Netfilter, iptables, nftables, and XDP/eBPF for security and performance enhancements.
- Design and implement Zero Trust Security models for networking and cloud infrastructure.
- Work on containerized networking (CNI), Kubernetes networking, and SDN solutions.
- Debug and troubleshoot networking and security issues using tcpdump, Wireshark, gdb, strace, and perf tools.
- Contribute to open-source networking projects and work with the SONiC community.
Basic Qualifications
- A Bachelor’s or Master’s degree in Computer Science, Electronics Engineering, or a related field.
- 8–12 years of experience in networking software development, security, and Linux systems programming.
- Strong expertise in SONiC architecture, SAI, and open networking platforms.
- Proficiency in L2/L3 networking protocols (BGP, OSPF, MPLS, VXLAN, EVPN, etc.).
- Strong knowledge of network security concepts, including firewalling, VPNs, and DDoS mitigation.
- Experience with Linux networking internals, Netfilter, iptables, nftables, XDP, and eBPF.
- Proficiency in C, C++, Python, and Go for networking software development.
- Strong debugging skills using tcpdump, Wireshark, gdb, strace, perf, and ASAN.
- Experience working with network ASICs (Broadcom, Mellanox, Marvell, or Intel-based chipsets).
- Good understanding of container networking, Kubernetes CNI, and SDN concepts.
- Hands-on experience with CI/CD, Git, Jenkins, and automated testing frameworks.
Preferred Qualifications
- Experience in DPDK, P4 programming, and FPGA-based networking solutions.
- Contributions to open-source networking projects (SONiC, FRR, Linux kernel, etc.).
- Knowledge of TLS, IPSec, MACsec, and secure boot mechanisms.
- Experience working with public cloud networking (AWS, Azure, GCP).


KEY DUTIES
- Independently own and resolve high-priority or complex customer issues with minimal supervision
- Reproduce and analyze product defects using advanced troubleshooting techniques and tools
- Collaborate with developers to identify root causes and drive timely resolution of defects
- Identify trends in escalations and provide feedback to improve product quality and customer experience
- Document investigation findings, root causes, and resolution steps clearly for both internal and external audiences
- Contribute to knowledge base articles and process improvements to enhance team efficiency
- Represent the escalation team in product reviews or defect triage meetings
- Build subject matter expertise in specific products or components
- Mentor and assist junior team members by reviewing their investigations and coaching through complex cases
- Participate in Agile ceremonies and contribute to team planning and backlog refinement
- Other duties as assigned
BASIC QUALIFICATIONS
- Typically requires 3–6 years of technical experience in a support, development, or escalation role
- Strong technical troubleshooting and root cause analysis skills
- Proficient in debugging tools, logs, and test environments
- Ability to independently manage multiple complex issues and drive them to closure
- Experience working with cross-functional teams in a collaborative, Agile environment
- Proficiency with relevant scripting or programming languages (e.g., Python, Bash, PowerShell, Java)
- Exceptional written and verbal communication skills — especially when engaging with customers in critical or escalated situations
- Demonstrated customer-first mindset with an emphasis on clarity, empathy, and follow-through
- Proactive and detail-oriented, with the ability to document and communicate technical concepts clearly
- Comfortable presenting findings or recommendations to both technical and non-technical stakeholders



Profile: AWS Data Engineer
Mode: Hybrid
Experience: 5-7 years
Locations: Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing (see the sketch after this list)
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
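As a rough illustration of the Lambda work listed above, here is a minimal S3-triggered handler; the event shape follows the standard S3 notification format, and the logging is illustrative:

```python
# Minimal AWS Lambda handler sketch for S3 "object created" events.
import json

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Log the size of each newly created S3 object referenced in the event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)
        print(json.dumps({"bucket": bucket, "key": key, "size": head["ContentLength"]}))
    return {"statusCode": 200}
```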
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency (a minimal upsert sketch follows this list).
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
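As a hedged example of the Delta Lake work mentioned above, the sketch below performs an upsert (merge) into an existing Delta table from a Databricks/Spark session; the table path, join key, and sample data are assumptions:

```python
# Minimal Delta Lake upsert sketch; assumes an active SparkSession (`spark`) with delta-spark installed
# and an existing Delta table at the given path. Path, schema, and key are illustrative.
from delta.tables import DeltaTable

updates_df = spark.createDataFrame(
    [(1, "Asha"), (2, "Ravi")],
    ["customer_id", "name"],
)

target = DeltaTable.forPath(spark, "/mnt/lake/customers")
(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```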
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance.
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.

We are looking for passionate developers with 4 - 8 years of experience in software development to join Metron Security team as Software Engineer.
Metron Security provides automation and integration services to leading Cyber Security companies. Our engineering team works on leading security platforms including - Splunk, IBM’s QRadar, ServiceNow, Crowdstrike, Cybereason, and other SIEM and SOAR platforms.
Software Engineer is a challenging role within Cyber Security Engineering integration development. The role involves developing a product/service that achieves high performance data exchange between two or more Cyber Security platforms. A Software Engineer is responsible for End-to-End delivery of the project, right from getting the requirements from customer to deploying the project for them on prem or on cloud, depending on the nature of the project. We follow the best practices of Engineering and keep evolving, we are agile. The Software Engineer is at the core of the evolution process.
Each integration requires reskilling yourself in the technology needed for that project. If you are passionate about programming and believe in the best practices of software engineering, here is what we offer:
- Developer-centric culture with no bureaucracy or red tape
- Chance to work on 200+ security platforms and more
- Opportunity to engage directly with end-users (customers) rather than being just a cog in the wheel
Position: Senior Software Engineer
Location: Pune
Mandatory Skills
- Efficiently able to design and implement software features.
- Expertise in at least one Object Oriented Programming language (Python, typescript, Java, Node.js, Angular, react.js C#, C++).
- Good knowledge on Data Structure and their correct usage.
- Open to learn any new software development skill if needed for the project.
- Alignment and utilisation of the core enterprise technology stacks and integration capabilities throughout the transition states.
- Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.
- Identify bottlenecks and bugs, and devise appropriate solutions.
- Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
- Good knowledge of the implications of Cyber Security on production systems.
- Experience architecting & estimating deep technical custom solutions & integrations.
Added advantage:
- You have experience in Cyber Security domain.
- You have developed software using web technologies.
- You have handled a project from start to end.
- You have worked in an Agile Development project and have experience of writing and estimating User Stories.
- Contribution to open source - Please share your link in the application/resume.


Job Summary:
- We are looking for a highly motivated and skilled Software Engineer to join our team.
- This role requires a strong understanding of the software development lifecycle, proficiency in coding, and excellent communication skills.
- The ideal candidate will be responsible for production monitoring, resolving minor technical issues, collecting client information, providing effective client interactions, and supporting our development team in resolving challenges.
Key Responsibilities:
- Client Interaction: Serve as the primary point of contact for client queries, provide excellent communication, and ensure timely issue resolution.
- Issue Resolution: Troubleshoot and resolve minor issues related to software applications in a timely manner.
- Information Collection: Gather detailed technical information from clients, understand the problem context, and relay the information to the development leads for further action.
- Collaboration: Work closely with development leads and cross-functional teams to provide timely support and resolution for customer issues.
- Documentation: Document client issues, actions taken, and resolutions for future reference and continuous improvement.
- Software Development Lifecycle: Be involved in maintaining, supporting, and optimizing software through its lifecycle, including bug fixes and enhancements.
- Automating Redundant Support Tasks (good to have): Should be able to automate redundant, repetitive tasks.
Required Skills and Qualifications:
Mandatory Skills:
- Expertise in at least one Object Oriented Programming language (Python, Java, C#, C++, React.js, Node.js).
- Good knowledge of data structures and their correct usage.
- Open to learn any new software development skill if needed for the project.
- Alignment and utilization of the core enterprise technology stacks and integration capabilities throughout the transition states.
- Participate in planning, definition, and high-level design of the solution and exploration of solution alternatives.
- Define, explore, and support the implementation of enablers to evolve solution intent, working directly with Agile teams to implement them.
- Good knowledge of the implications of Cyber Security on production.
- Experience architecting & estimating deep technical custom solutions & integrations.
Added advantage:
- You have developed software using web technologies.
- You have handled a project from start to end.
- You have worked in an Agile Development project and have experience of writing and estimating User Stories.
- Communication Skills: Excellent verbal and written communication skills, with the ability to clearly explain technical issues to non-technical clients.
- Client-Facing Experience: Strong ability to interact with clients, gather necessary information, and ensure a high level of customer satisfaction.
- Problem-Solving: Quick-thinking and proactive in resolving minor issues, with a focus on providing excellent user experience.
- Team Collaboration: Ability to collaborate with development leads, engineering teams, and other stakeholders to escalate complex issues or gather additional technical support when required.
Preferred Skills:
- Familiarity with Cloud Platforms and Cyber Security tools: Knowledge of cloud computing platforms and services (AWS, Azure, Google Cloud) and Cortex XSOAR, SIEM, SOAR, XDR tools is a plus.
- Automation and Scripting: Experience with automating processes or writing scripts to support issue resolution is an advantage.
Work Environment:
- This is a rotational shift position.
- During the evening shift, the timings will be 5 PM to 2 AM, and you will be expected to work independently and efficiently during these hours.
- The position may require occasional weekend shifts depending on the project requirements.
- Additional benefit of night allowance.

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
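As an illustration of the SageMaker workflow mentioned above, here is a minimal training-and-deployment sketch using the SageMaker Python SDK (the IAM role, entry-point script, and S3 paths are hypothetical placeholders):

import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # hypothetical execution role

# Train a scikit-learn model from a user-supplied training script
estimator = SKLearn(
    entry_point="train.py",             # hypothetical training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://example-bucket/train/"})

# Deploy the trained model behind a real-time endpoint for monitoring and inference
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")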

ABOUT US
We are a fast-growing, excellence-oriented mutual fund distribution and fintech firm delivering exceptional solutions to domestic/NRI/retail and ultra-HNI clients. Cambridge Wealth is a respected brand in the wealth segment, having won awards from BSE and Mutual Fund houses. Learn more about us at www.cambridgewealth.in
JOB OVERVIEW
Drive product excellence through data-backed decisions while ensuring efficient delivery and continuous improvement.
KEY RESPONSIBILITIES
- Sprint & Timeline Management: Drive Agile sprints with clear milestones to prevent scope creep
- Process Optimization: Identify bottlenecks early and implement standardised workflows
- Market Research: Analyze competitive landscape and customer preferences to inform strategy
- Feature Development: Refine product features based on customer feedback and data analysis
- Performance Analysis: Create actionable dashboards tracking KPIs and user behavior metrics
- Risk Management: Proactively identify potential roadblocks and develop contingency plans
- User Testing: Conduct testing sessions and translate feedback into product improvements
- Documentation: Develop comprehensive specs and user stories for seamless implementation
- Cross-Team Coordination: Align stakeholders on priorities and deliverables throughout development
TECHNICAL REQUIREMENTS
- Data Analysis: SQL proficiency for data extraction and manipulation
- Project Management: Expert in Agile methods and tracking tools
- Advanced Excel/Google/Zoho sheets: Expertise in pivot tables, VLOOKUP, and complex formulas
- Analytics Platforms: Experience with Mixpanel, Amplitude, or Google Analytics, Zoho Analytics
- Financial Knowledge: Understanding of mutual funds and fintech industry metrics
QUALIFICATIONS
- 2+ years experience in product analysis or similar role
- Strong analytical skills with the ability to collect, analyse, and interpret data from various sources.
- Basic understanding of user experience (UX) principles and methodologies.
- Excellent verbal and written communication skills for translating complex findings
- Ability to work collaboratively in a team environment and adapt to changing priorities.
- Eagerness to learn, take initiative, and contribute ideas to improve products and processes
READY TO SHAPE THE FUTURE OF FINTECH?
Apply now to join our award-winning team
Our Hiring Process:
- You Apply and answer a couple of quick questions [5 min]
- Recruiter screening phone interview [30 min]
- Online Technical assessment [60 min]
- Technical interview [45 min]
- Founder's interview [30 min]
- We make you an offer and proceed with reference and BGV checks
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.

Job Title: Python Django Microservices Lead (Django Backend Lead Developer)
Location: Indore/ Pune (Hybrid - Wednesday and Thursday WFO)
Timings - 12:30 PM to 9:30 PM
Experience Level: 8+ Years
Job Overview: We are seeking an experienced Django Backend Lead Developer to join our team. The ideal candidate will have a strong background in backend development, cloud technologies, and big data processing. This role involves leading technical projects, mentoring junior developers, and ensuring the delivery of high-quality solutions.
Responsibilities:
Lead the development of backend systems using Django.
Design and implement scalable and secure APIs.
Integrate Azure Cloud services for application deployment and management.
Utilize Azure Databricks for big data processing and analytics.
Implement data processing pipelines using PySpark.
Collaborate with front-end developers, product managers, and other stakeholders to deliver comprehensive solutions.
Conduct code reviews and ensure adherence to best practices.
Mentor and guide junior developers.
Optimize database performance and manage data storage solutions.
Ensure high performance and security standards for applications.
Participate in architecture design and technical decision-making.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
8+ years of experience in backend development.
8+ years of experience with Django.
Proven experience with Azure Cloud services.
Experience with Azure Databricks and PySpark.
Strong understanding of RESTful APIs and web services.
Excellent communication and problem-solving skills.
Familiarity with Agile methodologies.
Experience with database management (SQL and NoSQL).
Skills: Django, Python, Azure Cloud, Azure Databricks, Delta Lake and Delta tables, PySpark, SQL/NoSQL databases, RESTful APIs, Git, and Agile methodologies
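For reference, a minimal sketch of the kind of Django REST Framework API this lead role would own (the Order model and its fields are hypothetical):

from rest_framework import serializers, viewsets
from .models import Order  # hypothetical Django model

class OrderSerializer(serializers.ModelSerializer):
    class Meta:
        model = Order
        fields = ["id", "customer", "amount", "created_at"]

class OrderViewSet(viewsets.ModelViewSet):
    # Standard CRUD endpoints; authentication, pagination, and throttling would be added per project standards
    queryset = Order.objects.all().order_by("-created_at")
    serializer_class = OrderSerializer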

Role - MLOps Engineer
Required Experience - 4 Years
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Key Requirements:
- 4+ years of experience in Software Engineering with MLOps focus
- Strong expertise in AWS, particularly AWS SageMaker (required)
- AWS Data Zone experience (preferred)
- Proficiency in Python, R, Scala, or Spark
- Experience developing scalable, reliable, and secure applications
- Track record of production-grade development, integration and support


AccioJob is conducting an offline hiring drive with OneLab Ventures for the position of:
- AI/ML Engineer / Intern - Python, FastAPI, Flask/Django, PyTorch, TensorFlow, Scikit-learn, GenAI Tools
Apply Now: https://links.acciojob.com/44MJQSB
Eligibility:
- Degree: BTech / BSc / BCA / MCA / MTech / MSc / BCS / MCS
- Graduation Year:
- For Interns - 2024 and 2025
- For experienced - 2024 and before
- Branch: All Branches
- Location: Pune (work from office)
Salary:
- For interns - 25K for 6 months and 5-6 LPA PPO
- For experienced - Hike on the current CTC
Evaluation Process:
- Assessment at AccioJob Pune Skill Centre.
- Company side process: 2 rounds of tech interviews (Virtual + F2F) + 1 HR round
Apply Now: https://links.acciojob.com/44MJQSB
Important: Please bring your laptop & earphones for the test.


AccioJob is conducting an offline hiring drive with OneLab Ventures for the position of:
- Python Full Stack Engineer / Intern - Python, FastAPI, Flask/Django, HTML, CSS, JavaScript, and frameworks like React.js or Node.js
Apply Now: https://links.acciojob.com/4d0Gtd6
Eligibility:
- Degree: BTech / BSc / BCA / MCA / MTech / MSc / BCS / MCS
- Graduation Year:
- For Interns - 2024 and 2025
- For experienced - 2024 and before
- Branch: All Branches
- Location: Pune (work from office)
Salary:
- For interns - 25K for 6 months and 5-6 LPA PPO
- For experienced - Hike on the current CTC
Evaluation Process:
- Assessment at AccioJob Pune Skill Centre.
- Company side process: 2 rounds of tech interviews (Virtual + F2F) + 1 HR round
Apply Now: https://links.acciojob.com/4d0Gtd6
Important: Please bring your laptop & earphones for the test.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note the salary bracket will vary according to the candidate's experience -
- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA
- Experience more than 8 yrs - Salary up to 40 LPA


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage.
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.
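To illustrate the BigQuery side of this stack, a minimal sketch of querying a warehouse table from Python (the project, dataset, and table names are made up):

from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date
"""

# Run the query and stream results; in practice this would feed a pipeline or a Looker/Data Studio model
for row in client.query(query).result():
    print(row.event_date, row.events)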



Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Lead Data Scientist who will be responsible for
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring
- Oversight of team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- 9+ years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

We are looking for a skilled and passionate Data Engineer with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python frameworks such as PySpark for data processing.
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Basic database design and interaction with MySQL or DynamoDB.
- Debugging and troubleshooting application issues and performance bottlenecks.
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience in working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.
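As a small illustration of the AWS work described above, a sketch that reads an object from S3 and writes a record to DynamoDB with boto3 (bucket, key, and table names are hypothetical):

import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-events")  # hypothetical DynamoDB table

# Fetch a JSON payload from S3
obj = s3.get_object(Bucket="example-bucket", Key="incoming/event.json")
event = json.loads(obj["Body"].read())

# Persist a simple record; real pipelines would add validation and error handling
table.put_item(Item={"event_id": event["id"], "payload": json.dumps(event)})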

Should have strong hands-on experience of 8-10 years in Java development.
Should have strong knowledge of Java 11+, Spring, Spring Boot, Hibernate, and REST web services.
Strong knowledge of J2EE design patterns and microservices design patterns.
Should have strong hands-on knowledge of SQL / Postgres DB. Good to have exposure to NoSQL DB.
Should have strong knowledge of AWS services (Lambda, EC2, RDS, API Gateway, S3, CloudFront, Airflow).
Good to have Python and PySpark as secondary skills.
Should have good knowledge of CI/CD pipelines.
Should be strong in writing unit test cases and debugging Sonar issues.
Should be able to lead/guide a team of junior developers.
Should be able to collaborate with BAs and solution architects to create HLD and LLD documents.

At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus.
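For illustration, a minimal REST API test in the pytest style this role calls for (the endpoint, payload, and expected response are hypothetical):

import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_order_returns_201():
    payload = {"customer_id": 42, "amount": 99.5}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)
    assert response.status_code == 201
    body = response.json()
    assert body["customer_id"] == 42
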
Job Title : Automation Quality Engineer (Gen AI)
Experience : 3 to 5+ Years
Location : Bangalore / Chennai / Pune
Role Overview :
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities :
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate into CI/CD pipelines.
- Validate model accuracy, robustness, and monitor model drift.
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
Mandatory Skills :
- Test Automation : Selenium, Playwright, or DeepEval
- Programming/Scripting : Python, JavaScript
- API Testing : Postman, REST Assured
- Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
- Performance Testing : JMeter
- Bug Tracking : Azure DevOps
- Methodologies : Agile delivery experience
- Soft Skills : Strong communication and problem-solving abilities
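For context, a minimal browser check using Playwright's Python sync API, of the kind that would sit inside the automation framework above (the URL and expected title are placeholders):

from playwright.sync_api import sync_playwright

def test_homepage_title():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://app.example.com")  # hypothetical application under test
        assert "Dashboard" in page.title()
        browser.close()
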
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a talented Data Engineer to join our team. In this role, you will design, implement, and manage data pipelines, ensuring the accessibility and reliability of data for critical business processes. This is an exciting opportunity to work on scalable solutions that power data-driven decisions
Skillset:
Here is a list of some of the technologies you will work with (the list below is not set in stone)
Data Pipeline Orchestration and Execution:
● AWS Glue
● AWS Step Functions
● Databricks
Change Data Capture:
● Amazon Database Migration Service
● Amazon Managed Streaming for Apache Kafka with Debezium Plugin
Batch:
● AWS Step Functions (and Glue Jobs)
● Asynchronous queueing of batch job commands with RabbitMQ to various “ETL Jobs”
● Cron and supervisord processing on dedicated job server(s): Python & PHP
Streaming:
● Real-time processing via AWS MSK (Kafka), Apache Hudi, & Apache Flink
● Near real-time processing via worker (listeners) spread over AWS Lambda, custom server (daemons) written in Python and PHP Symfony
● Languages: Python & PySpark, Unix Shell, PHP Symfony (with Doctrine ORM)
● Monitoring & Reliability: Datadog & Cloudwatch
Things you will do:
● Build dashboards using Datadog and Cloudwatch to ensure system health and user support
● Build schema registries that enable data governance
● Partner with end-users to resolve service disruptions and evangelize our data product offerings
● Vigilantly oversee data quality and alert upstream data producers of issues
● Support and contribute to the data platform architecture strategy, roadmap, and implementation plans to support the company’s data-driven initiatives and business objectives
● Work with Business Intelligence (BI) consumers to deliver enterprise-wide fact and dimension data product tables to enable data-driven decision-making across the organization.
● Other duties as assigned
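To give a flavour of the streaming side of this stack, a minimal sketch of a listener consuming Debezium-style CDC events from Kafka (topic, broker, and payload fields are assumptions; uses the kafka-python package):

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example.orders.cdc",                           # hypothetical CDC topic
    bootstrap_servers=["broker.example.com:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Debezium payloads typically carry the operation type and before/after row images
for message in consumer:
    change = message.value
    print(change.get("op"), change.get("after"))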

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM to 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
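For illustration, a minimal Airflow DAG sequencing the Raw → Silver → Gold promotion described above (the task callables and schedule are placeholder assumptions; real tasks would trigger Databricks/Spark jobs):

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def promote(layer: str) -> None:
    # Placeholder: in practice this would submit a Databricks or Spark job for the given layer
    print(f"Running {layer} transformation")

with DAG(
    dag_id="raw_silver_gold_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    to_silver = PythonOperator(task_id="raw_to_silver", python_callable=promote, op_args=["silver"])
    to_gold = PythonOperator(task_id="silver_to_gold", python_callable=promote, op_args=["gold"])
    to_silver >> to_gold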
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG