50+ AWS (Amazon Web Services) Jobs in Pune | AWS (Amazon Web Services) Job openings in Pune
Apply to 50+ AWS (Amazon Web Services) Jobs in Pune on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.


One of the reputed clients in India

Our client is looking to hire a Databricks Admin immediately.
This is a PAN-India bulk hiring drive.
Minimum of 6–8+ years of experience with Databricks, PySpark/Python, and AWS.
A notice period of 15–30 days is preferred.
Share profiles at hr at etpspl dot com
Please refer/share this with friends or colleagues who are looking for a job.

About Nirmitee.io
Nirmitee.io is a fast-growing product engineering and IT services company building world-class products across healthcare and fintech. We believe in engineering excellence, innovation, and long-term impact. As a Tech Lead, you’ll be at the core of this journey — driving execution, building scalable systems, and mentoring a strong team.
What You’ll Do
- Lead by Example:
Be a hands-on contributor — writing clean, scalable, and production-grade code.
Review pull requests, set coding standards, and push for technical excellence.
- Own Delivery:
Take end-to-end ownership of sprints, architecture, code quality, and deployment.
Collaborate with PMs and founders to scope, plan, and execute product features.
- Build & Scale Teams:
Mentor and coach engineers to grow technically and professionally.
Foster a culture of accountability, transparency, and continuous improvement.
- Drive Process & Best Practices:
Implement strong CI/CD pipelines, testing strategies, and release processes.
Ensure predictable and high-quality delivery across multiple projects.
- Architect & Innovate:
Make key technical decisions, evaluate new technologies, and design system architecture for scale and performance.
Help shape the engineering roadmap with future-proof solutions.
What We’re Looking For
- 10+ years of experience in software development, with at least 2 years in a lead/mentorship role.
- Strong hands-on expertise in MERN Stack or Python/Django/Flask, or GoLang/Java/Rust.
- Experience with AWS / Azure / GCP, containerization (Docker/Kubernetes), and CI/CD pipelines.
- Deep understanding of system design, architecture patterns, and performance optimization.
- Proven track record of shipping products in fast-paced environments.
- Strong communication, leadership, and ownership mindset.
- Believes in delivering what is committed and has a show-up attitude.
Nice to Have
- Exposure to healthcare or fintech domains.
- Experience in building scalable SaaS platforms or open-source contributions.
- Knowledge of security, compliance, and observability best practices.
Why Nirmitee.io
- Work directly with the founding team and shape product direction.
- Flat hierarchy — your ideas will matter.
- Ownership, speed, and impact.
- Opportunity to build something historic.

Job Title: Mid-Level .NET Developer (Agile/SCRUM)
Location: Mohali (PTP or anywhere else)
Night Shift from 6:30 pm to 3:30 am IST
Experience: 5 Years
Job Summary:
We are seeking a proactive and detail-oriented Mid-Level .NET Developer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining high-quality applications using Microsoft technologies with a strong emphasis on .NET Core, C#, Web API, and modern front-end frameworks. You will collaborate with cross-functional teams in an Agile/SCRUM environment and participate in the full software development lifecycle—from requirements gathering to deployment—while ensuring adherence to best coding and delivery practices.
Key Responsibilities:
- Design, develop, and maintain applications using C#, .NET, .NET Core, MVC, and databases such as SQL Server, PostgreSQL, and MongoDB.
- Create responsive and interactive user interfaces using JavaScript, TypeScript, Angular, HTML, and CSS.
- Develop and integrate RESTful APIs for multi-tier, distributed systems.
- Participate actively in Agile/SCRUM ceremonies, including sprint planning, daily stand-ups, and retrospectives.
- Write clean, efficient, and maintainable code following industry best practices.
- Conduct code reviews to ensure high-quality and consistent deliverables.
- Assist in configuring and maintaining CI/CD pipelines (Jenkins or similar tools).
- Troubleshoot, debug, and resolve application issues effectively.
- Collaborate with QA and product teams to validate requirements and ensure smooth delivery.
- Support release planning and deployment activities.
Required Skills & Qualifications:
- 4–6 years of professional experience in .NET development.
- Strong proficiency in C#, .NET Core, MVC, and relational databases such as SQL Server.
- Working knowledge of NoSQL databases like MongoDB.
- Solid understanding of JavaScript/TypeScript and the Angular framework.
- Experience in developing and integrating RESTful APIs.
- Familiarity with Agile/SCRUM methodologies.
- Basic knowledge of CI/CD pipelines and Git version control.
- Hands-on experience with AWS cloud services.
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication and collaboration skills.
Preferred / Nice-to-Have Skills:
- Advanced experience with AWS services.
- Knowledge of Kubernetes or other container orchestration platforms.
- Familiarity with IIS web server configuration and management.
- Experience in the healthcare domain.
- Exposure to AI-assisted code development tools (e.g., GitHub Copilot, ChatGPT).
- Experience with application security and code quality tools such as Snyk or SonarQube.
- Strong understanding of SOLID principles and clean architecture patterns.
Technical Proficiencies:
- ASP.NET Core, ASP.NET MVC
- C#, Entity Framework, Razor Pages
- SQL Server, MongoDB
- REST API, jQuery, AJAX
- HTML, CSS, JavaScript, TypeScript, Angular
- Azure Services, Azure Functions, AWS
- Visual Studio
- CI/CD, Git

Job Description: Lead Full-Stack Developer
Location: Pune
Shift timings: 2–11 p.m. (Hybrid working model)
Job Overview: We are looking for two highly skilled Lead Developers based in India to join U.S.-driven Agile teams as key contributors. These roles will be pivotal in driving the end-to-end technical delivery of essential features across customer-facing and associate-facing platforms. This position requires a hands-on engineering approach, with at least half of the time dedicated to coding and core technical execution, while also leading and supporting the broader team in achieving project goals.
Primary Responsibilities:
Act as the senior technical lead within a globally distributed team, focusing on domains such as Loyalty, Mobile Applications, CRM, and Customer Pickup solutions.
Take ownership of design, development, testing, and deployment of cloud-native microservices and modular web components.
Guide and oversee code reviews, ensure adherence to coding guidelines, and promote engineering best practices.
Collaborate closely with U.S.-based product owners, architects, and delivery managers to define requirements, decompose epics, refine user stories, and estimate effort.
Mentor and coach junior developers in India, fostering technical excellence within a remote-first, collaborative work culture.
Provide transparent updates on risks, blockers, architectural choices, and delivery progress across international teams.
Deliver high-quality, maintainable, and testable code following scalability, observability, and resiliency standards.
Build and maintain integrations with Salesforce, internal/external APIs, SQL/NoSQL databases, and customer data platforms.
Required Skills & Experience:
Minimum 10 years of progressive software development experience with proven leadership in technical delivery.
Strong expertise in Java, Spring Boot, REST APIs, SQL (PostgreSQL, SQL Server), and front-end frameworks such as React or Angular.
Hands-on experience working in Agile teams, partnering directly with U.S.-based product and engineering stakeholders.
Advanced skills in debugging, performance optimization, and unit/integration test automation.
Proficiency with CI/CD pipelines, Git-based source control, containerization (Docker), and modern deployment strategies.
Exceptional communication skills with a proactive and solution-oriented approach in a distributed team environment.
Prior exposure to offshore-onshore collaboration models is highly desirable.

We are seeking a mid-to-senior level Full-Stack Developer with a strong understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.


Role: Data Scientist (Python + R Expertise)
Exp: 8 -12 Years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested, kindly share your updated resume on 82008 31681.

We’re Hiring | Full Stack Developer – Golang
Are you a skilled Full Stack Developer with hands-on experience in Golang?
Join our growing team and work on cutting-edge projects in a collaborative environment!
🔹 Role: Full Stack Developer – Golang
🔹 Experience: 5+ Years
🔹 Budget: ₹1,00,000 per month
🔹 Location: Yerwada, Pune (Hybrid – 2 days a week)
Key Responsibilities:
Develop and maintain scalable web applications using Golang for backend and modern frontend frameworks.
Design clean, efficient, and reusable code for both client and server sides.
Collaborate with cross-functional teams to define and deliver new features.
Ensure application performance, security, and scalability.
Troubleshoot and debug production issues effectively.
Requirements:
Strong proficiency in Golang (backend) and JavaScript/TypeScript (frontend).
Experience with React.js, RESTful APIs, and microservices architecture.
Familiarity with Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure) is a plus.
Excellent problem-solving and communication skills.

🚀 Hiring: PL/SQL Developer
⭐ Experience: 5+ Years
📍 Location: Pune
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
What We’re Looking For:
☑️ Hands-on PL/SQL developer with strong database and scripting skills, ready to work onsite and collaborate with cross-functional financial domain teams.
Key Skills:
✅ Must Have: PL/SQL, SQL, Databases, Unix/Linux & Shell Scripting
✅ Nice to Have: DevOps tools (Jenkins, Artifactory, Docker, Kubernetes), AWS/Cloud, basic Python, AML/Fraud/Financial domain, Actimize (AIS/RCM/UDM)


Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.


Job Position: Senior Technical Lead / Architect
Desired Skills: Python, Django, Flask, MySQL, PostgreSQL, Amazon Web Services, JavaScript, Identity Security, IGA, OAuth
Experience Range: 6 – 8 Years
Type: Full Time
Location: Pune, India
Job Description:
Tech Prescient is looking for an experienced and proven Technical Lead (Python/Django/Flask/FastAPI, React, and AWS/Azure Cloud) who has worked across the modern full stack to deliver scalable, secure software products and solutions. The ideal candidate should have experience leading from the front — handling customer interactions, mentoring teams, owning technical delivery, and ensuring the highest quality standards.
Key Responsibilities:
- Lead the end-to-end design and development of applications using the Python stack (Django, Flask, FastAPI).
- Architect and implement secure, scalable, and cloud-native solutions on AWS or Azure.
- Drive technical discussions, architecture reviews, and ensure adherence to design and code quality standards.
- Work closely with customers to translate business requirements into robust technical solutions.
- Oversee development teams, manage delivery timelines, and guide sprint execution.
- Design and implement microservices-based architectures and serverless deployments.
- Build and integrate RESTful APIs and backend services; experience with Django Rest Framework (DRF) is a plus.
- Responsible for infrastructure planning, deployment, and automation on AWS (ECS, Lambda, EC2, S3, RDS, CloudFormation, etc.).
- Collaborate with cross-functional teams to ensure seamless delivery and continuous improvement.
- Champion best practices in software security, CI/CD, and DevOps.
- Provide technical mentorship to developers and lead project communications with clients and internal stakeholders.
Identity & Security Expertise:
- Strong understanding of Identity and Access Management (IAM) principles and best practices.
- Experience in implementing Identity Governance and Administration (IGA) solutions for user lifecycle management, access provisioning, and compliance.
- Hands-on experience with OAuth 2.0, OpenID Connect, SAML, and related identity protocols for securing APIs and services.
- Experience integrating authentication and authorization mechanisms within web and cloud applications.
- Familiarity with Single Sign-On (SSO), MFA, and role-based access control (RBAC).
- Exposure to AWS IAM, Cognito, or other cloud-based identity providers.
- Ability to assess and enhance application security posture, ensuring compliance with enterprise identity and security standards.
Skills and Experience:
- 6 – 8 years of hands-on experience in software design, development, and delivery.
- Strong foundation in Python and related frameworks (Django, Flask, FastAPI).
- Experience designing secure, scalable microservices and API architectures.
- Good understanding of relational databases (MySQL, PostgreSQL).
- Proven leadership, communication, and customer engagement skills.
- Knowledge of Kubernetes is an added advantage.
- Excellent problem-solving skills and passion for learning new technologies.

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and platforms.
In this role, you’ll:
- Work closely with clients and internal teams to deliver robust data solutions powering analytics, AI/ML, and operational systems.
- Mentor junior engineers and bring engineering discipline into our data engagements.
Key Responsibilities
- Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.
- Implement scalable data models, warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills
- Strong programming skills in Python and advanced SQL expertise.
- Deep understanding of ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Hands-on with distributed data processing frameworks (Apache Spark, Flink, or similar).
- Experience with orchestration tools like Airflow (or similar).
- Familiarity with CI/CD pipelines and Git.
- Ability to debug, optimize, and scale data pipelines in production.
Good to Have
- Experience with cloud platforms (AWS preferred; GCP/Azure also welcome).
- Exposure to Databricks, dbt, or similar platforms.
- Understanding of data governance, quality frameworks, and observability.
- Certifications (e.g., AWS Data Analytics, Solutions Architect, or Databricks).
Other Expectations
- Comfortable working in fast-paced, client-facing environments.
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements, as needed.


Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in an AI-enabled data center as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks such as Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive hands-on Terraform practice; CI/CD (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.

Job Description
The ideal candidate will possess expertise in Core Java (at least Java 8), Spring framework, JDBC, threading, database management, and cloud platforms such as Azure and GCP. The candidate should also have strong debugging skills, the ability to understand multi-service flow, experience with large data processing, and excellent problem-solving abilities.
JD:
- Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation.
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes).
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban.
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.

Job Title : React + Node.js Developer (Full Stack)
Experience : 5+ Years
Location : Mumbai or Pune (Final location to be decided post-interview)
Notice Period : Immediate to 15 Days
Interview Rounds : 1 Internal Round + 1 Client Round
Job Summary :
We are looking for a highly skilled Full Stack Developer (React + Node.js) with strong expertise in both frontend and backend development.
The ideal candidate should demonstrate hands-on experience with databases, excellent project understanding, and the ability to deliver scalable, high-performance applications in production environments.
Mandatory Skills :
React.js, Node.js, PostgreSQL/MySQL, JavaScript (ES6+), Docker, AWS/GCP, full-stack development, production system experience, and strong project understanding with hands-on database expertise.
Key Responsibilities :
- Design, develop, and deploy robust full-stack applications using React (frontend) and Node.js (backend).
- Exhibit a deep understanding of database design, optimization, and integration using PostgreSQL or MySQL.
- Translate project requirements into efficient, maintainable, and scalable technical solutions.
- Build clean, modular, and reusable components following SOLID principles and industry best practices.
- Manage backend services, APIs, and data-driven functionalities for large-scale applications.
- Work closely with product and engineering teams to ensure smooth end-to-end project delivery.
- Use Docker and cloud platforms (AWS/GCP) for containerization, deployment, and scaling of services.
- Participate in design discussions, code reviews, and troubleshooting production issues.
Required Skills :
- 5+ Years of hands-on experience in full-stack development using React and Node.js.
- Strong understanding and hands-on expertise with relational databases (PostgreSQL/MySQL).
- Solid grasp of JavaScript (ES6+), and proficiency in Object-Oriented Programming (OOP) or Functional Programming (FP).
- Proven experience working with production-grade systems and scalable architectures.
- Proficiency with Docker, API development, and cloud services (preferably AWS or GCP).
- Excellent project understanding, problem-solving ability, and strong communication skills (verbal and written).
Good to Have :
- Experience in Golang or Elixir for backend development.
- Knowledge of Kubernetes, Redis, RabbitMQ, or similar distributed tools.
- Exposure to AI APIs and tools.
- Contributions to open-source projects.


We're seeking an AI/ML Engineer to join our team-
As an AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real world business problems. You will work closely with cross-functional teams, including data scientists, software engineers, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms, data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
- Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI
- AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
- Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
- Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
- Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behavior, and performance metrics
- Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
- Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
- Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
- Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference
Requirements
- Bachelor's, Master's or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
- Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
- Proficiency in programming languages commonly used for AI/ML. Preferably Python
- Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
- Strong understanding of machine learning algorithms, statistics, and data structures
- Experience with data preprocessing, data wrangling, and feature engineering
- Knowledge of deep learning architectures, neural networks, and transfer learning
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
- Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
- Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
- Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
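The requirements above call for evaluating models with metrics such as accuracy, precision, and recall. As a minimal, dependency-free sketch of what those metrics measure (in practice one would use scikit-learn's `metrics` module), with toy labels chosen purely for illustration:

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, and recall for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# Toy example: 4 of 6 predictions are correct
m = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```

Precision and recall diverge from accuracy exactly when classes are imbalanced, which is why all three are tracked during model validation.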
Who we are:
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data. For more information, visit www.DeepIntent.com.
Who you are:
- 5+ years of software engineering experience with 2+ years in senior technical roles
- Proven track record designing and implementing large-scale, distributed backend systems
- Experience leading technical initiatives across multiple teams
- Strong background in mentoring engineers and driving technical excellence
- Programming Languages: Expert-level proficiency in Java and Spring Boot framework
- Framework Expertise: Deep experience with Spring ecosystem (Spring Boot, Spring Security, Spring Data, Spring Cloud)
- API Development: Strong experience building RESTful APIs, GraphQL endpoints, and microservices architectures
- Cloud Platforms: Advanced knowledge of AWS, GCP, Azure and cloud-native development patterns
- Databases: Proficiency with both SQL (PostgreSQL, MySQL, Oracle) and NoSQL (MongoDB, Redis, Cassandra) databases, including design and optimization
- Bachelor's or Master's degree in Computer Science, Engineering, Software Engineering, or related field (or equivalent industry experience)
- Excellent technical communication skills for both technical and non-technical stakeholders
- Strong mentorship abilities with experience coaching junior and mid-level engineers
- Proven ability to drive consensus on technical decisions across teams
- Comfortable with ambiguous problems and breaking down complex challenges
What You'll Do:
- Lead design and implementation of complex backend systems and micro-services serving multiple product teams
- Drive architectural decisions ensuring scalability, reliability, and performance
- Create technical design documents, system architecture diagrams, and API specifications
- Champion engineering best practices including code quality, testing strategies, and security
- Partner with Tech Leads, Engineering Managers, and Product Managers to align solutions with business objectives
- Lead technical initiatives requiring coordination between backend, frontend, and data teams
- Participate in architecture review boards and provide guidance for organisation-wide initiatives
- Serve as technical consultant for complex system design problems across product areas
- Mentor and coach engineers at various levels with technical guidance and career development
- Conduct code reviews and design reviews, sharing knowledge and raising technical standards
- Lead technical discussions and knowledge-sharing sessions
- Help establish coding standards and engineering processes
- Design and develop robust, scalable backend services and APIs using Java and Spring Boot
- Implement comprehensive testing strategies and optimise application performance
- Ensure security best practices across all applications
- Research and prototype new approaches to improve system architecture and developer productivity
Job Title: PySpark/Scala Developer
Functional Skills: Experience in Credit Risk/Regulatory risk domain
Technical Skills: Spark, PySpark, Python, Hive, Scala, MapReduce, Unix shell scripting
Good to Have Skills: Exposure to Machine Learning Techniques
Job Description:
5+ years of experience developing, fine-tuning, and implementing programs/applications using Python/PySpark/Scala on Big Data/Hadoop platforms.
Roles and Responsibilities:
a) Work with a leading bank's Risk Management team on specific projects/requirements pertaining to risk models in consumer and wholesale banking
b) Enhance machine learning models using PySpark or Scala
c) Work with data scientists to build ML models based on business requirements and follow the ML lifecycle to deploy them all the way to the production environment
d) Participate in feature engineering, model training, scoring, and retraining
e) Architect data pipelines and automate data ingestion and model jobs
Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macroeconomic data to solve business problems.
- Working experience in PySpark and Scala to develop, validate, and implement models and code in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Attitude to learn and comprehend periodic changes in regulatory requirements per the FED.
About the Role
We are looking for a hands-on and solution-oriented Senior Data Scientist – Generative AI to join our growing AI practice. This role is ideal for someone who thrives in designing and deploying Gen AI solutions on AWS, enjoys working with customers directly, and can lead end-to-end implementations. You will play a key role in architecting AI solutions, driving project delivery, and guiding junior team members.
Key Responsibilities
- Design and implement end-to-end Generative AI solutions for customers on AWS.
- Work closely with customers to understand business challenges and translate them into Gen AI use-cases.
- Own technical delivery, including data preparation, model integration, prompt engineering, deployment, and performance monitoring.
- Lead project execution – ensure timelines, manage stakeholder communications, and collaborate across internal teams.
- Provide technical guidance and mentorship to junior data scientists and engineers.
- Develop reusable components and reference architectures to accelerate delivery.
- Stay updated with latest developments in Gen AI, particularly AWS offerings like Bedrock, SageMaker, LangChain integrations, etc.
Required Skills & Experience
- 4–8 years of hands-on experience in Data Science/AI/ML, with at least 2–3 years in Generative AI projects.
- Proficient in building solutions using AWS AI/ML services (e.g., SageMaker, Amazon Bedrock, Lambda, API Gateway, S3, etc.).
- Experience with LLMs, prompt engineering, RAG pipelines, and deployment best practices.
- Solid programming experience in Python, with exposure to libraries such as Hugging Face, LangChain, etc.
- Strong problem-solving skills and ability to work independently in customer-facing roles.
- Experience in collaborating with Systems Integrators (SIs) or working with startups in India is a major plus.
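The role above asks for experience with RAG pipelines. The retrieval step of RAG can be sketched dependency-free as cosine-similarity ranking over embedding vectors; in an AWS deployment the embeddings would typically come from an embedding model (e.g., via Amazon Bedrock) and the top passages would be injected into the LLM prompt. The document names and 3-dimensional "embeddings" below are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the ids of the k documents most similar to the query vector."""
    scored = sorted(corpus.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy corpus: doc_id -> embedding (real embeddings have hundreds of dimensions)
corpus = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.9, 0.1],
    "warranty-terms": [0.8, 0.2, 0.1],
}
top = retrieve([1.0, 0.0, 0.0], corpus, k=2)
```

Production systems replace the linear scan with a vector index (e.g., OpenSearch or a managed vector store), but the ranking logic is the same.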
Soft Skills
- Strong verbal and written communication for effective customer engagement.
- Ability to lead discussions, manage project milestones, and coordinate across stakeholders.
- Team-oriented with a proactive attitude and strong ownership mindset.
What We Offer
- Opportunity to work on cutting-edge Generative AI projects across industries.
- Collaborative, startup-like work environment with flexibility and ownership.
- Exposure to full-stack AI/ML project lifecycle and client-facing roles.
- Competitive compensation and learning opportunities in the AWS AI ecosystem.
About Oneture Technologies
Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of Digital Technologies and Data to drive transformations and turning ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions— from ideation, project inception, planning through deployment to ongoing support and maintenance.
Our core competencies and technical expertise include cloud-powered Product Engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners, and our "startup-like agility with enterprise-like maturity" philosophy, have helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.
We are looking for a highly skilled Solution Architect with a passion for software engineering and deep experience in backend technologies, cloud, and DevOps. This role will be central in managing, designing, and delivering large-scale, scalable solutions.
Core Skills
- Strong coding and software engineering fundamentals.
- Experience in large-scale custom-built applications and platforms.
- Champion of SOLID principles, OO design, and pair programming.
- Agile, Lean, and Continuous Delivery – CI, TDD, BDD.
- Frontend experience is a plus.
- Hands-on with Java, Scala, Golang, Rust, Spark, Python, and JS frameworks.
- Experience with Docker, Kubernetes, and Infrastructure as Code.
- Excellent understanding of cloud technologies – AWS, GCP, Azure.
Responsibilities
- Own all aspects of technical development and delivery.
- Understand project requirements and create architecture documentation.
- Ensure adherence to development best practices through code reviews.
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).

About the Role
We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.
Responsibilities
- Build and maintain scalable features across the frontend and backend.
- Work with tech stacks like Node.js, React.js, Vue.js, and others.
- Contribute to system design, architecture, and code quality enforcement.
- Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
- Collaborate in code reviews, performance optimizations, and product iterations.
Required Skills
- 4–6 years of hands-on fullstack development experience.
- Strong command over JavaScript, Node.js, and React.js.
- Solid understanding of REST APIs and/or GraphQL.
- Good grasp of OOP principles, TDD, and writing clean, maintainable code.
- Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Familiarity with HTML, CSS, and frontend performance optimization.
Good to Have
- Exposure to Docker, AWS, Kubernetes, or Terraform.
- Experience in other backend languages or frameworks.
- Experience with microservices and scalable system architectures.

We’re hiring a Full Stack Developer (5+ years, Pune location) to join our growing team!
You’ll be working with React.js, Node.js, JavaScript, APIs, and cloud deployments to build scalable and high-performing web applications.
Responsibilities include developing responsive apps, building RESTful APIs, working with SQL/NoSQL databases, and deploying apps on AWS/Docker.
Experience with CI/CD, Git, secure coding practices (OAuth/JWT), and Agile collaboration is a must.
If you’re passionate about full stack development and want to work on impactful projects, we’d love to connect!
Profile: AWS Data Engineer
Mandatory skills: AWS + Databricks + PySpark + SQL
Location: Bangalore/Pune/Hyderabad/Chennai/Gurgaon
Notice Period: Immediate
Key Requirements :
- Design, build, and maintain scalable data pipelines to collect, process, and store data from multiple datasets.
- Optimize data storage solutions for better performance, scalability, and cost-efficiency.
- Develop and manage ETL/ELT processes to transform data as per schema definitions, apply slicing and dicing, and make it available for downstream jobs and other teams.
- Collaborate closely with cross-functional teams to understand system and product functionalities, pace up feature development, and capture evolving data requirements.
- Engage with stakeholders to gather requirements and create curated datasets for downstream consumption and end-user reporting.
- Automate deployment and CI/CD processes using GitHub workflows, identifying areas to reduce manual, repetitive work.
- Ensure compliance with data governance policies, privacy regulations, and security protocols.
- Utilize cloud platforms like AWS and work on Databricks for data processing with S3 Storage.
- Work with distributed systems and big data technologies such as Spark, SQL, and Delta Lake.
- Integrate with SFTP to push data securely from Databricks to remote locations.
- Analyze and interpret Spark query execution plans to fine-tune queries for faster and more efficient processing.
- Strong problem-solving and troubleshooting skills in large-scale distributed systems.
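The pipeline work above includes transforming data per schema definitions before handing it to downstream jobs. In Databricks this would be expressed in PySpark; the shape of such a transform (field renames, type casts, dropping invalid records) can be sketched dependency-free, with all field names here being illustrative rather than from any real schema:

```python
from datetime import datetime

def transform(rows):
    """Rename raw fields, cast types, and drop rows that fail validation."""
    out = []
    for raw in rows:
        try:
            row = {
                "order_id": int(raw["id"]),
                "amount": float(raw["amt"]),
                "ts": datetime.strptime(raw["date"], "%Y-%m-%d").date().isoformat(),
            }
        except (KeyError, ValueError):
            continue  # bad record: skipped here; a real pipeline would quarantine it
        out.append(row)
    return out

clean = transform([
    {"id": "1", "amt": "19.99", "date": "2024-01-05"},
    {"id": "x", "amt": "5.00", "date": "2024-01-06"},  # non-numeric id -> dropped
])
```

In PySpark the same logic becomes column expressions (`withColumn`, `cast`, `filter`) so it runs distributed instead of row-by-row.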


Key Responsibilities
- Develop and maintain custom WordPress themes, plugins, and APIs using PHP, MySQL, HTML, CSS, jQuery, and JavaScript.
- Build and optimize REST APIs and integrate with third-party services.
- Ensure high performance, scalability, and security of WordPress applications.
- Collaborate with Product Managers, UI/UX Designers, QA, and DevOps to deliver high-quality solutions.
- Write clean, testable, and maintainable code following best practices.
- Troubleshoot and resolve WordPress-related technical issues.
- Stay updated on WordPress and web technology trends.
Required Skills & Experience
- 7+ years of experience in PHP and WordPress development.
- Strong expertise in custom theme and plugin development.
- Proficiency in JavaScript, jQuery, AJAX, HTML5, and CSS3.
- Solid experience with MySQL and database optimization.
- Hands-on experience with Git and Agile methodologies.
- Knowledge of WordPress security best practices, SEO, and performance tuning.
- Familiarity with CI/CD pipelines, Docker, and cloud platforms (AWS/GCP) is a plus.
- Experience with multisite or headless WordPress is an advantage.
- Experience with Laravel, Symfony, Yii, and other PHP-based frameworks is a plus.
Nice to have
- Cloudflare Workers (Wrangler, KV/R2, Durable Objects)
- Salesforce OAuth/API experience; HubSpot Forms event hooks; middleware patterns.
- AWS basic understanding
- Cloudflare basic understanding
- Uptime/transaction monitoring via Checkly or other automated systems.
- Entry-level DevOps/networking understanding: HTTP/TLS, CORS, DNS, proxies, caching, request/response debugging (HAR).
Qualifications
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- Proven track record in building and maintaining large-scale WordPress platforms.

• Data Pipeline Development: Design and implement scalable data pipelines using PySpark and Databricks on AWS cloud infrastructure
• ETL/ELT Operations: Extract, transform, and load data from various sources using Python, SQL, and PySpark for batch and streaming data processing
• Databricks Platform Management: Develop and maintain data workflows, notebooks, and clusters in Databricks environment for efficient data processing
• AWS Cloud Services: Utilize AWS services including S3, Glue, EMR, Redshift, Kinesis, and Lambda for comprehensive data solutions
• Data Transformation: Write efficient PySpark scripts and SQL queries to process large-scale datasets and implement complex business logic
• Data Quality & Monitoring: Implement data validation, quality checks, and monitoring solutions to ensure data integrity across pipelines
• Collaboration: Work closely with data scientists, analysts, and other engineering teams to support analytics and machine learning initiatives
• Performance Optimization: Monitor and optimize data pipeline performance, query efficiency, and resource utilization in Databricks and AWS environments
Required Qualifications:
• Experience: 3+ years of hands-on experience in data engineering, ETL development, or related field
• PySpark Expertise: Strong proficiency in PySpark for large-scale data processing and transformations
• Python Programming: Solid Python programming skills with experience in data manipulation libraries (e.g., pandas)
• SQL Proficiency: Advanced SQL skills including complex queries, window functions, and performance optimization
• Databricks Experience: Hands-on experience with Databricks platform, including notebook development, cluster management, and job scheduling
• AWS Cloud Services: Working knowledge of core AWS services (S3, Glue, EMR, Redshift, IAM, Lambda)
• Data Modeling: Understanding of dimensional modeling, data warehousing concepts, and ETL best practices
• Version Control: Experience with Git and collaborative development workflows
Preferred Qualifications:
• Education: Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field
• Advanced AWS: Experience with additional AWS services like Athena, QuickSight, Step Functions, and CloudWatch
• Data Formats: Experience working with various data formats (JSON, Parquet, Avro, Delta Lake)
• Containerization: Basic knowledge of Docker and container orchestration
• Agile Methodology: Experience working in Agile/Scrum development environments
• Business Intelligence Tools: Exposure to BI tools like Tableau, Power BI, or Databricks SQL Analytics
Technical Skills Summary:
Core Technologies:
- PySpark & Spark SQL
- Python (pandas, boto3)
- SQL (PostgreSQL, MySQL, Redshift)
- Databricks (notebooks, clusters, jobs, Delta Lake)
AWS Services:
- S3, Glue, EMR, Redshift
- Lambda, Athena
- IAM, CloudWatch
Development Tools:
- Git/GitHub
- CI/CD pipelines, Docker
- Linux/Unix command line
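Among the SQL skills listed above are window functions. A minimal sketch of a common pattern, a per-group running total with `SUM(...) OVER (PARTITION BY ...)`, runnable against Python's bundled sqlite3 (SQLite 3.25+ supports window functions); table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("a", 10.0), ("a", 5.0), ("b", 7.0), ("a", 2.0)],
)

# Running total of spend per customer, ordered by insertion order (rowid)
rows = conn.execute("""
    SELECT customer, amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY rowid) AS running_total
    FROM orders
    ORDER BY customer, rowid
""").fetchall()
```

The same `OVER (PARTITION BY ... ORDER BY ...)` clause works unchanged in Redshift, PostgreSQL, and Spark SQL, which is why window functions transfer well across the stack this role describes.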


Job Description: Python Developer
Location: Pune
Employment Type: Full-Time
Experience: 0.6-1+ years
Role Overview
We are looking for a skilled Python Developer with 0.6-1+ years of experience to join our team. The ideal candidate should have hands-on experience in Python, REST APIs, Flask, and databases. You will be responsible for designing, developing, and maintaining scalable backend services.
Key Responsibilities
- Develop, test, and maintain high-quality Python applications.
- Design and build RESTful APIs using Flask.
- Integrate APIs with front-end and third-party services.
- Work with relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis).
- Optimize performance and troubleshoot issues in backend applications.
- Collaborate with cross-functional teams to define and implement new features.
- Follow best practices for code quality, security, and performance optimization.
Required Skills
- Strong proficiency in Python (0.6-1+ years).
- Experience with Flask (or FastAPI/Django).
- Hands-on experience with REST API development.
- Proficiency in working with databases (SQL & NoSQL).
- Familiarity with Git, Docker, and CI/CD pipelines is a plus.
Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience working in Agile/Scrum environments.
- Ability to write clean, scalable, and well-documented code.
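The role centers on building RESTful APIs with Flask. Flask itself sits on the WSGI interface, so the request/response cycle it manages can be shown with a dependency-free WSGI sketch (in Flask the equivalent endpoint is a few lines under `@app.route`); the `/items` resource and its data are invented for illustration:

```python
import json

ITEMS = [{"id": 1, "name": "widget"}]  # stand-in for a database table

def app(environ, start_response):
    """Minimal WSGI app: GET /items returns the collection as JSON."""
    if environ["PATH_INFO"] == "/items" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps(ITEMS).encode()
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(body)))])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# A WSGI app can be exercised directly with a fake environ -- no server needed
captured = {}
def fake_start(status, headers):
    captured["status"] = status

resp = app({"PATH_INFO": "/items", "REQUEST_METHOD": "GET"}, fake_start)
```

Flask's test client does essentially this under the hood, which is why Flask routes are straightforward to unit test without running a server.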

Senior Cloud & ML Infrastructure Engineer
Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi
Experience: 6–10+ Years
Night Shift - 9 pm to 6 am
About the Role:
We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
Key Responsibilities:
● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR
● Design and implement multi-region, highly available AWS solutions for real-time inference and batch processing
● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK
● Establish CI/CD practices for ML model packaging, validation, and drift monitoring
● Oversee infrastructure security, including IAM policies, encryption at rest/in-transit, and compliance standards
● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale
● Collaborate on data lake and analytics integration
● Serve as a technical mentor and guide AWS adoption patterns across engineering teams
Required Skills:
● 6+ years designing and deploying cloud infrastructure on AWS at scale
● Proven experience building and maintaining ML pipelines with services like SageMaker, ECS/EKS, or custom Docker pipelines
● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS
● Deep experience with automation frameworks, IaC tools, and CI/CD strategies
● Advanced scripting proficiency in Python, Go, or Bash
● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)
Nice to Have:
● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments
● Experience designing systems for physical robot fleet telemetry, diagnostics, and control
● Familiarity with multi-stage production environments and robotic software rollout processes
● Competence in frontend hosting for dashboard or API visualization
● Involvement with real-time streaming, MQTT, or edge inference workflows
● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices
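One responsibility above is CI/CD for ML model validation and drift monitoring. A common baseline drift check compares a production feature's distribution against its training distribution; a minimal sketch using a mean-shift z-score follows (real systems typically use PSI or KS tests, and the 3.0 threshold here is an illustrative choice, not a standard):

```python
import math

def mean_shift_zscore(train_values, live_values):
    """z-score of the live-window mean against the training mean,
    using the training variance at the live sample size (simplified)."""
    n = len(train_values)
    mu = sum(train_values) / n
    var = sum((x - mu) ** 2 for x in train_values) / (n - 1)
    se = math.sqrt(var / len(live_values))
    live_mu = sum(live_values) / len(live_values)
    return (live_mu - mu) / se

def drifted(train_values, live_values, threshold=3.0):
    """Flag drift when the live mean sits far outside the training regime."""
    return abs(mean_shift_zscore(train_values, live_values)) > threshold

train = [10.0, 11.0, 9.0, 10.5, 9.5]
flag = drifted(train, [15.0, 16.0, 15.5])  # live window shifted well above training
```

In the AWS stack described above, a check like this would run as a scheduled job (or via SageMaker Model Monitor) and gate automated retraining or alerting.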

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5-day working week, work from office

Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here:
Job Title: AWS DevOps Engineer
Experience Level: 5+ Years
Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon
Summary:
We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.
Key Responsibilities:
- Provision AWS infrastructure using Terraform (IaC).
- Manage and troubleshoot Kubernetes clusters (EKS/ECS).
- Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
- Support CI/CD pipelines using Jenkins and GitHub.
- Collaborate with teams to resolve infrastructure and deployment issues.
- Maintain documentation of infrastructure and operational procedures.
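As a small illustration of the Terraform provisioning responsibility above: Terraform also accepts JSON-syntax configuration (`*.tf.json`) alongside HCL, so a minimal resource can be sketched from Python. The AMI ID, instance type, and tags below are placeholders, not recommendations.

```python
import json

# Terraform reads JSON-syntax configuration (*.tf.json) as well as HCL.
# Minimal sketch emitting an EC2 instance resource; values are placeholders.
config = {
    "resource": {
        "aws_instance": {
            "app_server": {
                "ami": "ami-0123456789abcdef0",   # placeholder AMI
                "instance_type": "t3.micro",
                "tags": {"Name": "app-server", "ManagedBy": "terraform"},
            }
        }
    }
}

tf_json = json.dumps(config, indent=2)
# Writing this to main.tf.json and running `terraform init && terraform plan`
# would show the planned EC2 instance.
```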
Required Skills:
- 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
- Strong Linux administration and troubleshooting skills.
- Experience managing Kubernetes clusters.
- Basic experience with CI/CD tools like Jenkins and GitHub.
- Good communication skills and a positive, team-oriented attitude.
Preferred:
- AWS Certification (e.g., Solutions Architect, DevOps Engineer).
- Exposure to Agile and DevOps practices.
- Experience with monitoring and logging tools.

7+ years of experience in Python Development
Good experience in microservices and API development.
Must have exposure to large-scale data.
Good to have: Gen AI experience.
Code versioning and collaboration using Git.
Knowledge of libraries for extracting data from websites (web scraping).
Knowledge of SQL and NoSQL databases
Familiarity with RESTful APIs
Familiarity with Cloud (Azure /AWS) technologies
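As an illustration of the web-data-extraction skill listed above, a minimal standard-library sketch of pulling link targets out of an HTML page. Production scrapers would typically use requests with BeautifulSoup, or Scrapy; this only shows the underlying parsing idea.

```python
from html.parser import HTMLParser

# Minimal link extractor using only the standard library; the sample page
# below is illustrative.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<html><body><a href="/jobs">Jobs</a><a href="/about">About</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
# parser.links now holds the extracted hrefs
```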
About Wissen Technology:
• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
• Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
• Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
• Wissen Technology has been certified as a Great Place to Work®.
• Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, StateStreet, Flipkart, Swiggy, Trafigura, and GE.
Website : www.wissen.com

Job Title : Software Development Engineer (Python, Django & FastAPI + React.js)
Experience : 2+ Years
Location : Nagpur / Remote (India)
Job Type : Full Time
Collaboration Hours : 11:00 AM – 7:00 PM IST
About the Role :
We are seeking a Software Development Engineer to join our growing team. The ideal candidate will have strong expertise in backend development with Python, Django, and FastAPI, as well as working knowledge of AWS.
While backend development is the primary focus, you should also be comfortable contributing to frontend development using JavaScript, TypeScript, and React.
Mandatory Skills : Python, Django, FastAPI, AWS, JavaScript/TypeScript, React, REST APIs, SQL/NoSQL.
Key Responsibilities :
- Design, develop, and maintain backend services using Python (Django / FastAPI).
- Deploy, scale, and manage applications on AWS cloud services.
- Collaborate with frontend developers and contribute to React (JS/TS) development when required.
- Write clean, efficient, and maintainable code following best practices.
- Ensure system performance, scalability, and security.
- Participate in the full software development lifecycle : planning, design, development, testing, and deployment.
- Work collaboratively with cross-functional teams to deliver high-quality solutions.
Requirements :
- Bachelor’s degree in Computer Science, Computer Engineering, or related field.
- 2+ years of professional software development experience.
- Strong proficiency in Python, with hands-on experience in Django and FastAPI.
- Practical experience with AWS cloud services.
- Basic proficiency in JavaScript, TypeScript, and React for frontend development.
- Solid understanding of REST APIs, databases (SQL/NoSQL), and software design principles.
- Familiarity with Git and collaborative workflows.
- Strong problem-solving ability and adaptability in a fast-paced environment.
Good to Have :
- Experience with Docker for containerization.
- Knowledge of CI/CD pipelines and DevOps practices.
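As a framework-agnostic illustration of the REST work this role centers on: Django (via `urlpatterns`) and FastAPI (via `@app.get`/`@app.post`) both ultimately map a (method, path) pair to a handler function. The route table and handler names below are assumptions for illustration only.

```python
# Framework-agnostic sketch of REST dispatch; in Django or FastAPI the
# framework owns this table, here it is spelled out by hand.
ROUTES = {}

def route(method, path):
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/health")
def health():
    return {"status": "ok"}

@route("POST", "/items")
def create_item(body):
    return {"created": body["name"]}

def dispatch(method, path, body=None):
    """Look up the handler and return (status_code, response_dict)."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, {"error": "not found"}
    result = handler(body) if body is not None else handler()
    return 200, result
```

The same separation (routing vs. handler logic) is what makes handlers easy to unit-test without a running server.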

About the Role
We are looking for a highly skilled Machine Learning Lead with proven expertise in demand forecasting to join our team. The ideal candidate will have 4-8 years of experience building and deploying ML models, strong knowledge of AWS ML services and MLOps practices, and the ability to lead a team while working directly with clients. This is a client-facing role that requires strong communication skills, technical depth, and leadership ability.
Key Responsibilities
- Lead end-to-end design, development, and deployment of demand forecasting models.
- Collaborate with clients to gather requirements, define KPIs, and translate business needs into ML solutions.
- Architect and implement ML workflows using AWS ML ecosystem (SageMaker, Bedrock, Lambda, S3, Step Functions, etc.).
- Establish and enforce MLOps best practices for scalable, reproducible, and automated model deployment and monitoring.
- Mentor and guide a team of ML engineers and data scientists, ensuring technical excellence and timely delivery.
- Partner with cross-functional teams (engineering, data, business) to integrate forecasting insights into client systems.
- Present results, methodologies, and recommendations to both technical and non-technical stakeholders.
- Stay updated with the latest advancements in forecasting algorithms, time-series modeling, and AWS ML offerings.
Required Skills & Experience
- 4-8 years of experience in machine learning with a strong focus on time-series forecasting and demand prediction.
- Hands-on experience with AWS ML stack (Amazon SageMaker, Step Functions, Lambda, S3, Athena, CloudWatch, etc.).
- Strong understanding of MLOps pipelines (CI/CD for ML, model monitoring, retraining workflows).
- Proficiency in Python, SQL, and ML libraries (TensorFlow, PyTorch, Scikit-learn, Prophet, GluonTS, etc.).
- Experience working directly with clients and stakeholders to understand business requirements and deliver ML solutions.
- Strong leadership and team management skills, with the ability to mentor and guide junior team members.
- Excellent communication and presentation skills for both technical and business audiences.
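For a sense of the time-series work above, a toy sketch of single exponential smoothing, the simplest member of the forecasting family (ARIMA, Prophet, GluonTS) this role draws on. The smoothing factor and the demand series are illustrative; production work would use the AWS/MLOps stack described in the posting.

```python
# Toy single-exponential-smoothing forecaster; alpha and the series below
# are illustrative only.
def exp_smooth_forecast(series, alpha=0.5):
    """Smooth the series and return the one-step-ahead forecast,
    which for simple exponential smoothing equals the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

weekly_demand = [100, 120, 110, 130]
forecast = exp_smooth_forecast(weekly_demand, alpha=0.5)
```

Higher `alpha` weights recent demand more heavily; tuning it against held-out history is the usual first step before reaching for heavier models.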
Preferred Qualifications
- Experience with retail, FMCG, or supply chain demand forecasting use cases.
- Exposure to generative AI and LLMs for augmenting forecasting solutions.
- AWS Certification (e.g., AWS Certified Machine Learning – Specialty).
What We Offer
- Opportunity to lead impactful demand forecasting projects with global clients.
- Exposure to cutting-edge ML, AI, and AWS technologies.
- Collaborative, fast-paced, and growth-oriented environment.
- Competitive compensation and benefits.
Senior Associate Technology L1 – Java Microservices
Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Job Description
We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.
We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.
Your Impact:
• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.
• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business
• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.
Qualifications
➢ 5 to 7 Years of software development experience
➢ Strong development skills in Java JDK 1.8 or above
➢ Java fundamentals like exception handling, serialization/deserialization, and immutability concepts
➢ Good fundamental knowledge of Enums, Collections, Annotations, Generics, autoboxing, and data structures
➢ Databases: RDBMS/NoSQL (SQL, joins, indexing)
➢ Multithreading (ReentrantLock, Fork/Join, synchronization, Executor Framework)
➢ Spring Core & Spring Boot, security, transactions
➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)
➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)
➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker, and containerization)
➢ Logical/analytical skills; thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns
➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)
➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks
➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.
➢ Good communication skills and ability to work with global teams to define and deliver on projects.
➢ Sound understanding/experience in software development process, test-driven development.
➢ Cloud – AWS/Azure/GCP/PCF; any private cloud would also be fine
➢ Experience in Microservices

Role Overview
We're looking for experienced Data Engineers who can independently design, build, and manage scalable data platforms. You'll work directly with clients and internal teams to develop robust data pipelines that support analytics, AI/ML, and operational systems.
You’ll also play a mentorship role and help establish strong engineering practices across our data projects.
Key Responsibilities
- Design and develop large-scale, distributed data pipelines (batch and streaming)
- Implement scalable data models, warehouses/lakehouses, and data lakes
- Translate business requirements into technical data solutions
- Optimize data pipelines for performance and reliability
- Ensure code is clean, modular, tested, and documented
- Contribute to architecture, tooling decisions, and platform setup
- Review code/design and mentor junior engineers
Must-Have Skills
- Strong programming skills in Python and advanced SQL
- Solid grasp of ETL/ELT, data modeling (OLTP & OLAP), and stream processing
- Hands-on experience with frameworks like Apache Spark, Flink, etc.
- Experience with orchestration tools like Airflow
- Familiarity with CI/CD pipelines and Git
- Ability to debug and scale data pipelines in production
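To make the ETL/ELT skill above concrete, a minimal batch pipeline sketch: extract raw rows, transform (clean and cast), and load into a warehouse table. `sqlite3` stands in for the real warehouse here, and the table and column names are illustrative.

```python
import sqlite3

# Minimal extract-transform-load sketch; raw_rows simulates an extracted
# batch, sqlite3 stands in for the warehouse.
raw_rows = [
    ("2024-01-01", "  Widget ", "10"),
    ("2024-01-02", "gadget", "5"),
]

def transform(row):
    # Clean product names and cast quantities before loading
    date, product, qty = row
    return (date, product.strip().lower(), int(qty))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (date TEXT, product TEXT, qty INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 (transform(r) for r in raw_rows))
total = conn.execute("SELECT SUM(qty) FROM sales").fetchone()[0]
```

In Spark or Flink the shape is the same; only the execution engine and scale change.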
Preferred Skills
- Experience with cloud platforms (AWS preferred, GCP or Azure also fine)
- Exposure to Databricks, dbt, or similar tools
- Understanding of data governance, quality frameworks, and observability
- Certifications (e.g., AWS Data Analytics, Solutions Architect, Databricks) are a bonus
What We’re Looking For
- Problem-solver with strong analytical skills and attention to detail
- Fast learner who can adapt across tools, tech stacks, and domains
- Comfortable working in fast-paced, client-facing environments
- Willingness to travel within India when required
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
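As an illustration of the Python automation scripting above, a retry-with-exponential-backoff helper, a common building block in deployment and monitoring scripts. The flaky check below simulates a health probe; in practice it would hit a real endpoint.

```python
import time

# Retry wrapper with exponential backoff; attempts and base_delay are
# illustrative defaults.
def retry(fn, attempts=4, base_delay=0.01):
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}

def flaky_health_check():
    # Simulated probe: fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service warming up")
    return "healthy"

status = retry(flaky_health_check)
```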
Preferred Qualifications:
AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
Experience working in Agile/Scrum environments.
Strong problem-solving and analytical skills.
Role & Responsibilities :
- Lead the design, analysis, and implementation of technical solutions.
- Take full ownership of product features.
- Participate in detailed discussions with the product management team regarding requirements.
- Work closely with the engineering team to design and implement scalable solutions.
- Create detailed functional and technical specifications.
- Follow Test-Driven Development (TDD) and deliver high-quality code.
- Communicate proactively with your manager regarding risks and progress.
- Mentor junior team members and provide technical guidance.
- Troubleshoot and resolve production issues with RCA and long-term solutions
Required Skills & Experience :
- Bachelor’s/Master’s degree in Computer Science or a related field with a solid academic track record.
- 6+ years of hands-on experience in backend development for large-scale enterprise products.
- Strong programming skills in Java; familiarity with Python is a plus.
- Deep understanding of data structures, algorithms, and problem-solving.
- Proficient in Spring Boot and RESTful APIs.
- Experience with distributed technologies like Elasticsearch, Kafka, MongoDB, Hazelcast, Ceph, etc.
- Strong experience in building scalable, concurrent applications.
- Exposure to Service-Oriented Architecture (SOA) and Test-Driven Development (TDD).
- Excellent communication and collaboration skills.
Preferred Technologies :
- Java
- Spring Boot, J2EE
- ElasticSearch
- Kafka
- MongoDB, Ceph
- AWS
- Storm, Hazelcast
- TDD, SOA
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you

Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location- Pune/ Chennai
Job Type- Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4+ years of experience in a Web Application centric Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
- Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 4 years of development experience with at least one high-level language such as Python, Java, or Go.
- Bachelor’s (B.E, B.Tech) or Master’s degree (M.E, M.Tech. MCA) in computer science, Computer Engineering or equivalent
- 2 years of development experience in public cloud environment using Kubernetes etc (Google Cloud and/or AWS)
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.


Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type:Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7 years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
- Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Responsibilities:
● Design and build scalable APIs and microservices in Node.js (or equivalent backend frameworks).
● Develop and optimize high-performance systems handling large-scale data and concurrent users.
● Ensure system security, reliability, and fault tolerance.
● Collaborate closely with product managers, designers, and frontend engineers for seamless delivery.
● Write clean, maintainable, and well-documented code with a focus on best practices.
● Contribute to architectural decisions, technology choices, and overall system design.
● Monitor, debug, and continuously improve backend performance.
● Stay updated with modern backend technologies and bring innovation into the product.
Desired Qualifications & Skillset:
● 2+ years of professional backend development experience.
● Proficiency with Node.js, Express.js, or similar frameworks.
● Strong knowledge of web application architecture, databases (SQL/NoSQL), and caching strategies.
● Experience with cloud platforms (AWS/GCP/Azure), CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus.
● Ability to break down complex problems into scalable solutions.
● Strong logical aptitude, quick learning ability, and a proactive mindset

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Deploy and manage applications in the cloud (preferably AWS)
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


Full Stack Mobile Application Developer
No of positions - 2
Strictly in-office position
Seniority - 5-8 years
About Wednesday
Wednesday is an engineering services company specializing in Data Engineering, applied AI, and Product Engineering. We partner with ambitious companies to solve their most pressing engineering challenges.
Job Description
We seek a highly skilled Fullstack Mobile Developer who is passionate about crafting exceptional mobile experiences and robust backend systems. This role demands a deep understanding of React Native, Node.js, and modern cloud ecosystems like AWS, combined with a commitment to best practices and continuous improvement.
As part of our team, you will work closely with cross-functional teams to design, build, and maintain high-performance mobile applications and scalable backend solutions for our clients. The ideal candidate is a team player with a growth mindset, a passion for excellence, and the ability to energise and inspire those around them.
Key Responsibilities
Mobile Development
- Design, develop, and maintain high-quality, scalable mobile applications using React Native.
- Implement responsive UI/UX designs that deliver seamless user experiences across devices.
- Leverage modern development techniques and tools to ensure robust and maintainable codebases.
Backend Engineering
- Build and maintain scalable backend systems using Python or Node.js and cloud technologies like AWS.
- Design and manage relational (SQL) and non-relational (NoSQL) databases to support application functionality.
- Develop RESTful APIs and integrations to power mobile applications and services.
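As a hedged illustration of the RESTful API work described above (the `users` resource, its fields, and the in-memory store are all hypothetical, standing in for a real database-backed service), a minimal endpoint handler in Python might look like:

```python
import json

# In-memory stand-in for a users table (hypothetical schema).
USERS = {
    1: {"id": 1, "name": "Asha", "email": "asha@example.com"},
    2: {"id": 2, "name": "Ravi", "email": "ravi@example.com"},
}

def handle_get_user(user_id: int) -> tuple[int, str]:
    """Return (HTTP status code, JSON body) for GET /users/<id>."""
    user = USERS.get(user_id)
    if user is None:
        # REST convention: unknown resource -> 404 with an error body
        return 404, json.dumps({"error": "user not found"})
    return 200, json.dumps(user)

if __name__ == "__main__":
    status, body = handle_get_user(1)
    print(status, body)
```

In a real service this handler would sit behind a framework router (Express for Node.js, or Flask/FastAPI for Python) and query a database instead of a dict, but the status-code and JSON-body contract is the same.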
Programming Best Practices
- Use programming best practices, including code reviews, automated testing, and documentation.
Collaboration and Communication
- Work with other engineers to deliver on client expectations and project goals.
Continuous Learning and Mentorship
- Demonstrate a commitment to learning new tools, techniques, and technologies to stay at the forefront of mobile and backend development.
Qualifications
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 5-8 years of professional experience as a Fullstack Developer.
- Proven expertise in building and scaling mobile applications and backend systems in a services environment.
- Strong proficiency in Python or Node.js, AWS, and database technologies (SQL or NoSQL).
- Knowledge of software engineering best practices, including clean code principles and test-driven development.
- Familiarity with AI copilots and a willingness to incorporate AI-driven tools into workflows.
- Excellent communication and collaboration skills, with the ability to inspire and influence teammates.
- Demonstrated growth mindset, passion for learning, and a commitment to excellence.

Senior Software Engineer
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 4-10 years of progressive experience with back-end development in a client-server application environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana:
Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.

🚀 We’re Hiring: Senior Python Backend Developer 🚀
📍 Location: Baner, Pune (Work from Office)
💰 Compensation: ₹6 LPA
🕑 Experience Required: Minimum 2 years as a Python Backend Developer
About Us
Foto Owl AI is a fast-growing product-based company headquartered in Baner, Pune.
We specialize in:
⚡ Hyper-personalized fan engagement
🤖 AI-powered real-time photo sharing
📸 Advanced media asset management
What You’ll Do
As a Senior Python Backend Developer, you’ll play a key role in designing, building, and deploying scalable backend systems that power our cutting-edge platforms.
- Architect and develop complex, secure, and scalable backend services
- Build and maintain APIs & data pipelines for web, mobile, and AI-driven platforms
- Optimize SQL & NoSQL databases for high performance
- Manage AWS infrastructure (EC2, S3, RDS, Lambda, CloudWatch, etc.)
- Implement observability, monitoring, and security best practices
- Collaborate cross-functionally with product & AI teams
- Mentor junior developers and conduct code reviews
- Troubleshoot and resolve production issues with efficiency
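As a small sketch of the database-optimization work listed above (the `photos` table and its columns are hypothetical, and SQLite stands in for PostgreSQL/MySQL), adding an index changes a query plan from a full table scan to an index search:

```python
import sqlite3

# Hypothetical photo-metadata table; SQLite in-memory DB used for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, event_id INTEGER, url TEXT)")
conn.executemany(
    "INSERT INTO photos (event_id, url) VALUES (?, ?)",
    [(i % 50, f"https://example.com/p/{i}.jpg") for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return SQLite's query plan for a statement as one string."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT url FROM photos WHERE event_id = 7")   # full table scan
conn.execute("CREATE INDEX idx_photos_event ON photos (event_id)")
after = plan("SELECT url FROM photos WHERE event_id = 7")    # index search
print(before)
print(after)
```

The same principle (inspect the plan, index the filtered column) carries over to PostgreSQL's `EXPLAIN ANALYZE`, and analogously to secondary indexes in NoSQL stores like MongoDB.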
What We’re Looking For
✅ Strong expertise in Python backend development
✅ Solid knowledge of Data Structures & Algorithms
✅ Hands-on experience with SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis, etc.)
✅ Proficiency in RESTful APIs & Microservice design
✅ Knowledge of Docker, Kubernetes, and cloud-native systems
✅ Experience managing AWS-based deployments
Why Join Us?
At Foto Owl AI, you’ll be part of a passionate team building world-class media tech products used in sports, events, and fan engagement platforms. If you love scalable backend systems, real-time challenges, and AI-driven products, this is the place for you.

Job Description
We are seeking a highly skilled Sr. Fullstack Mobile Developer with a passion for crafting exceptional mobile experiences and robust backend systems. This role demands a deep understanding of React Native, Node.js, and modern cloud ecosystems like AWS, combined with a commitment to best practices and continuous improvement.
As part of our team, you will work closely with cross-functional teams to design, build, and maintain high-performance mobile applications and scalable backend solutions for our clients. The ideal candidate is a team player with a growth mindset, a passion for excellence, and the ability to energize and inspire those around them.
Requirements
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 4-6 years of professional experience as a Fullstack Developer, with at least 3 years in mobile development using React Native.
- Proven expertise in building and scaling mobile applications and backend systems in a services environment.
- Strong proficiency in Python or Node.js, AWS, and database technologies (SQL and NoSQL).
- Experience implementing CI/CD pipelines and automation tools.
- Deep knowledge of software engineering best practices, including clean code principles and test-driven development.
- Familiarity with AI copilots and a willingness to incorporate AI-driven tools into workflows.
- Excellent communication and collaboration skills, with the ability to inspire and influence teammates.
- Demonstrated growth mindset, passion for learning, and a commitment to excellence.
- A perfectionist attitude with an eye for detail and a drive to deliver outstanding results.
- Prior experience working with a services company is a must.
Benefits
- Creative Freedom: A culture that empowers you to innovate and take bold product decisions for client projects.
- Comprehensive Healthcare: Extensive health coverage for you and your family.
- Tailored Growth Plans: Personalized professional development programs to help you achieve your career aspirations.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
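The automation-scripting responsibility above often comes down to patterns like retry-with-backoff around flaky infrastructure calls. A minimal sketch (the health-check target is hypothetical; in practice `fn` might wrap a deployment step or an HTTP probe):

```python
import time

def retry(fn, attempts=5, base_delay=1.0, backoff=2.0, sleep=time.sleep):
    """Call fn until it succeeds, sleeping base_delay * backoff**i between failures."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * backoff ** i)

# Example: a flaky check that succeeds on the third call.
calls = {"n": 0}
def flaky_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

# sleep is injectable so the example (and tests) run without real delays.
print(retry(flaky_check, sleep=lambda s: None))
```

Injecting `sleep` keeps the helper testable; a production version would typically add jitter and log each failed attempt to the monitoring stack.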
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
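The wrangling/cleansing step above can be sketched in plain Python (in practice the same logic would typically be expressed as PySpark DataFrame transformations such as `dropna`, `dropDuplicates`, and `withColumn`; the column names here are hypothetical):

```python
def clean_records(records):
    """Drop rows missing required fields, deduplicate by id, normalize types."""
    seen = set()
    cleaned = []
    for r in records:
        if r.get("id") is None or r.get("amount") is None:
            continue  # drop incomplete rows
        if r["id"] in seen:
            continue  # deduplicate on the primary key
        seen.add(r["id"])
        cleaned.append({
            "id": int(r["id"]),
            "amount": float(r["amount"]),
            "country": (r.get("country") or "unknown").strip().lower(),
        })
    return cleaned

raw = [
    {"id": "1", "amount": "10.5", "country": " IN "},
    {"id": "1", "amount": "10.5", "country": "IN"},   # duplicate id
    {"id": "2", "amount": None},                      # incomplete row
    {"id": "3", "amount": "7", "country": None},      # missing country
]
print(clean_records(raw))
```

At Databricks scale the same transformation would run distributed over a Spark cluster, but the cleansing rules (drop, dedupe, cast, default) are identical.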
🧪 Required Skills
| Skill Area | Tools & Technologies |
| --- | --- |
| Cloud Platforms | AWS (S3, Lambda, Glue, EMR, Redshift) |
| Big Data | Databricks, Apache Spark, PySpark |
| Programming | Python, SQL |
| Data Engineering | ETL/ELT, Data Lakes, Data Warehousing |
| Analytics | Data Modeling, Visualization, BI Reporting |
| Gen AI Integration | OpenAI, Hugging Face, LangChain (preferred) |
| DevOps (Bonus) | Git, Jenkins, Terraform, Docker |
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment