50+ AWS (Amazon Web Services) Jobs in Pune | AWS (Amazon Web Services) Job openings in Pune
Apply to 50+ AWS (Amazon Web Services) Jobs in Pune on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain: Information Technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum/Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
Mandatory Skills: Java API and AWS
Notice period: 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications will be a great advantage
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
• Strong hands-on experience with AWS services.
• Expertise in Terraform and IaC principles.
• Experience building CI/CD pipelines and working with Git.
• Proficiency with Docker and Kubernetes.
• Solid understanding of Linux administration, networking fundamentals, and IAM.
• Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
• Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
• Scripting experience in Bash, Python, or PowerShell.
• Exposure to GCP, Azure, or multi-cloud architectures is a plus.
Role Description
This is a full-time on-site role for a Python Full Stack Developer located in Pune. You will be responsible for end-to-end development of scalable, AI-driven web applications. Day-to-day tasks involve architecting asynchronous backend services using Python and FastAPI, building dynamic user interfaces with ReactJS, and managing cloud infrastructure on AWS. You will collaborate with data scientists and product teams to integrate AI models into enterprise solutions while ensuring high performance and reliability.
Key Responsibilities
1. Design and develop high-performance asynchronous APIs using Python and FastAPI (see the sketch after this list).
2. Build responsive, interactive frontends using ReactJS, HTML, CSS, and Tailwind CSS.
3. Implement distributed task queues and caching mechanisms using Celery and Redis.
4. Architect and optimize databases, managing both structured (PostgreSQL) and unstructured (MongoDB) data.
5. Deploy and manage infrastructure on AWS (EC2, Lambda, S3) and maintain CI/CD pipelines for automated deployment.
6. Integrate AI/ML models into production workflows and optimize system performance for scalability.
7. Ensure application security, data privacy, and code quality through best practices and regular testing.
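To ground the first and third responsibilities, below is a minimal, hypothetical sketch of an asynchronous FastAPI endpoint with Redis as a cache-aside layer. Everything here (the route, cache key, and the fetch_report_from_db placeholder) is illustrative and assumed, not taken from the actual product.

```python
# Hypothetical sketch: async FastAPI endpoint with Redis cache-aside.
# Assumes redis-py >= 4.2 (redis.asyncio) and a local Redis instance.
import asyncio

import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

async def fetch_report_from_db(report_id: int) -> str:
    # Placeholder for a real async PostgreSQL/MongoDB lookup.
    await asyncio.sleep(0.1)
    return f"report-{report_id}"

@app.get("/reports/{report_id}")
async def get_report(report_id: int):
    key = f"report:{report_id}"
    cached = await cache.get(key)          # try the cache first
    if cached is not None:
        return {"report": cached, "cached": True}
    report = await fetch_report_from_db(report_id)
    await cache.set(key, report, ex=300)   # 5-minute TTL on the cached copy
    return {"report": report, "cached": False}
```

Run it with, e.g., `uvicorn main:app`; heavier background work (the Celery/Redis task-queue responsibility) would typically live outside the request path in a worker rather than in the endpoint itself.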
Required Skills & Qualifications
1. 3–5 years of experience in full-stack development with a strong focus on Python.
2. Proficiency in FastAPI and deep understanding of asynchronous programming (asyncio).
3. Solid experience with ReactJS, HTML, CSS, JavaScript, and Tailwind CSS.
4. Hands-on experience with Celery and Redis for background task processing.
5. Working knowledge of AWS services and containerization tools like Docker.
6. Proficiency in database management using PostgreSQL and MongoDB.
7. Experience setting up CI/CD pipelines (Jenkins, GitHub Actions, etc.) and version control (Git).
8. Strong understanding of RESTful API design, microservices, and security best practices.
Job Role: Teamcenter Admin
• Teamcenter and CAD (NX) Configuration Management
• Advanced debugging and root-cause analysis beyond L2
• Code fixes and minor defect remediation
• AWS knowledge, which is foundational to our Teamcenter architecture
• Experience supporting weekend and holiday code deployments
• Operational administration (break/fix, ticket escalations, problem management)
• Support for project activities
• Deployment and code release support
• Hypercare support following deployment, which is expected to onboard 1,000+ additional users
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit (see the helper sketch after this list).
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
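As a hedged illustration of the Git LFS step above, here is a small Python helper that scans a checked-out workspace for files over GitHub's 100 MB per-file limit and prints the `git lfs track` commands to run before migration. The script layout and paths are assumptions for illustration; `git lfs track` itself is a standard Git LFS command.

```python
# Illustrative helper: find files above GitHub's 100 MB per-file limit
# and print the Git LFS tracking commands to run before migration.
import os
import sys

LIMIT_BYTES = 100 * 1024 * 1024  # GitHub rejects individual files over 100 MB

def find_large_files(root: str):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and os.path.getsize(path) > LIMIT_BYTES:
                yield os.path.relpath(path, root)

if __name__ == "__main__":
    workspace = sys.argv[1] if len(sys.argv) > 1 else "."
    for rel_path in find_large_files(workspace):
        print(f"git lfs track '{rel_path}'")  # then commit .gitattributes
```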
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Role Overview:
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web application-centric client-server development environment focused on Systems Management, Systems Monitoring, and Performance Management Software.
- Ability to understand integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go.
- Bachelor's (B.E., B.Tech) or Master's degree (M.E., M.Tech, MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in a public cloud environment using Kubernetes, etc. (Google Cloud and/or AWS)
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand experience developing Systems, Network, and Performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana:
Virtana delivers the industry's broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.
Job Location: Kharadi, Pune
Job Type: Full-Time
About Us:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have 10 years of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the "givers gain" philosophy and strive to provide value in order to seek value. We are committed to delivering top-notch solutions to our clients and are looking for a talented Web UI Developer to join our dynamic team.
Qualifications:
- Strong Experience in JavaScript and React
- Experience in building multi-tier SaaS applications with exposure to microservices, caching, pub-sub, and messaging technologies
- Experience with design patterns
- Familiarity with UI component libraries (such as Material-UI or Bootstrap) and RESTful APIs
- Experience with web frontend technologies such as HTML5, CSS3, LESS, Bootstrap
- A strong foundation in computer science, with competencies in data structures, algorithms, and software design
- Bachelor's / Master's Degree in CS
- Experience with Git is mandatory
- Exposure to AWS, Docker, and CI/CD systems like Jenkins is a plus
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Job Description -
Profile: .Net Full Stack Lead
Experience Required: 7–12 Years
Location: Pune, Bangalore, Chennai, Coimbatore, Delhi, Hosur, Hyderabad, Kochi, Kolkata, Trivandrum
Work Mode: Hybrid
Shift: Normal Shift
Key Responsibilities:
- Design, develop, and deploy scalable microservices using .NET Core and C#
- Build and maintain serverless applications using AWS services (Lambda, SQS, SNS)
- Develop RESTful APIs and integrate them with front-end applications
- Work with both SQL and NoSQL databases to optimize data storage and retrieval
- Implement Entity Framework for efficient database operations and ORM
- Lead technical discussions and provide architectural guidance to the team
- Write clean, maintainable, and testable code following best practices
- Collaborate with cross-functional teams to deliver high-quality solutions
- Participate in code reviews and mentor junior developers
- Troubleshoot and resolve production issues in a timely manner
Required Skills & Qualifications:
- 7–12 years of hands-on experience in .NET development
- Strong proficiency in .NET Framework, .NET Core, and C#
- Proven expertise with AWS services (Lambda, SQS, SNS)
- Solid understanding of SQL and NoSQL databases (SQL Server, MongoDB, DynamoDB, etc.)
- Experience building and deploying Microservices architecture
- Proficiency in Entity Framework or EF Core
- Strong knowledge of RESTful API design and development
- Experience with React or Angular is good to have
- Understanding of CI/CD pipelines and DevOps practices
- Strong debugging, performance optimization, and problem-solving skills
- Experience with design patterns, SOLID principles, and best coding practices
- Excellent communication and team leadership skills
Job Title: Dot Net Full Stack Lead
Experience Required: 7–12 Years
Location: Pune, Bangalore, Chennai, Coimbatore, Delhi, Hosur, Hyderabad, Kochi, Kolkata, Trivandrum
Job Type: Full-time
About the Role:
We are looking for a skilled .NET Developer with strong AWS cloud experience to join our engineering team. You will be responsible for designing, developing, and maintaining scalable microservices-based applications using .NET technologies and AWS cloud services.
Key Responsibilities:
- Design, develop, and deploy microservices using .NET Core and C#
- Build and maintain serverless applications using AWS Lambda, SQS, and SNS
- Develop RESTful APIs and integrate them with front-end applications
- Work with both SQL and NoSQL databases to optimize data storage and retrieval
- Implement Entity Framework for database operations and ORM
- Write clean, maintainable, and testable code following best practices
- Collaborate with cross-functional teams to deliver high-quality solutions
- Participate in code reviews and contribute to technical documentation
- Troubleshoot and resolve production issues in a timely manner
Mandatory Skills:
- Strong proficiency in .NET Framework and .NET Core
- Expertise in C# programming
- Hands-on experience with AWS services (Lambda, SQS, SNS)
- Solid understanding of SQL and NoSQL databases
- Experience building and deploying Microservices architecture
- Proficiency in Entity Framework or EF Core
- Knowledge of RESTful API design and development
- Understanding of CI/CD pipelines and DevOps practices
Good to Have:
- Experience with React or Angular for full-stack development
- Knowledge of containerization (Docker, Kubernetes)
- Familiarity with other AWS services (EC2, S3, DynamoDB, API Gateway)
- Experience with message queuing and event-driven architecture
- Understanding of SOLID principles and design patterns
- Experience with unit testing and test-driven development (TDD)
Key Responsibilities
- Develop and maintain custom WordPress themes, plugins, and APIs using PHP, MySQL, HTML, CSS, jQuery, and JavaScript.
- Build and optimize REST APIs and integrate with third-party services.
- Ensure high performance, scalability, and security of WordPress applications.
- Collaborate with Product Managers, UI/UX Designers, QA, and DevOps to deliver high-quality solutions.
- Write clean, testable, and maintainable code following best practices.
- Troubleshoot and resolve WordPress-related technical issues.
- Stay updated on WordPress and web technology trends.
Required Skills & Experience
- 7+ years of experience in PHP and WordPress development.
- Strong expertise in custom theme and plugin development.
- Proficiency in JavaScript, jQuery, AJAX, HTML5, and CSS3.
- Solid experience with MySQL and database optimization.
- Hands-on experience with Git and Agile methodologies.
- Knowledge of WordPress security best practices, SEO, and performance tuning.
- Familiarity with CI/CD pipelines, Docker, and cloud platforms (AWS/GCP) is a plus.
- Experience with multisite or headless WordPress is an advantage.
- Experience with Laravel, Symfony, Yii, and other PHP-based frameworks is a plus.
Nice to have
- Cloudflare Workers (Wrangler, KV/R2, Durable Objects)
- Salesforce OAuth/API experience; HubSpot Forms event hooks; middleware patterns.
- Basic understanding of AWS
- Basic understanding of Cloudflare
- Uptime/transaction monitoring via Checkly or other automated systems.
- Entry-level DevOps/networking understanding: HTTP/TLS, CORS, DNS, proxies, caching, request/response debugging (HAR).
Qualifications
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- Proven track record in building and maintaining large-scale WordPress platforms.
Numino Labs
Business: Software product engineering services (Pune, Goa).
Clients: Software product companies in the USA.
Business model: Exclusive teams for working on client products; direct and daily interactions with clients
Client
Silicon Valley startup in genAI; $45M+ in funding.
Product: B2B SaaS.
Core IP: Physics AI foundation model for hardware designers, with a specific focus on semiconductor chip design.
Customers: World's top chip manufacturers
Responsibilities
- Team player: Delivers effectively with teams; interpersonal skills, communication skills, risk management skills
- Technical Leadership: Works with ambiguous requirements, designs solutions, independently drives delivery to customers
- Hands-on coder: Leverages AI to drive implementation across ReactJS, Python, DB, unit testing, test automation, cloud infra, and CI/CD automation.
Requirements
- Strong computer science fundamentals: data structures & algorithms, networking, RDBMS, and distributed computing
- 8-15 years of experience on Python Stack: Behave, PyTest, Python Generators & async operations, multithreading, context managers, decorators, descriptors
- Python frameworks: FastAPI, Flask, Django, or SQLAlchemy
- Expertise in Microservices, REST/gRPC APIs design, Authentication, Single Sign-on
- Experience delivering high-performance solutions on the cloud
- Some front-end experience: JavaScript and Next.js/ReactJS
- Some experience in DevOps, Cloud Infra Automation, Test Automation
Proficiency in Java 8+.
Solid understanding of REST APIs (Spring Boot), microservices, databases (SQL/NoSQL), and caching systems like Redis/Aerospike.
Familiarity with cloud platforms (AWS, GCP, Azure) and DevOps tools (Docker, Kubernetes, CI/CD).
Good understanding of data structures, algorithms, and software design principles.
What You’ll Do:
We are looking for a Staff Operations Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Engineering Operations team and will require integration and partnership with the Engineering Organization. The ideal candidate is a self-starter who is inquisitive, is not afraid to take on and learn from challenges, and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams.
- Develop and standardize all interface points for analysts to retrieve and analyze data with a focus on research methodologies and data-based decision-making.
- Optimize queries and data access efficiencies, serve as an expert in how to most efficiently attain desired data points.
- Build “mastered” versions of the data for Analytics-specific querying use cases.
- Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent.
- Interpret analytics methodology requirements and apply them to data architecture to create standardized queries and operations for use by analytics teams.
- Implement DataOps practices.
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics-specific objectives.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
- Operate between Engineers and Analysts to unify both practices for analytics insight creation.
Who You Are:
- 8+ years of experience in tech support (specializing in monitoring and maintaining data pipelines).
- Adept in market research methodologies and using data to deliver representative insights.
- Inquisitive and curious; understands how to query complicated data sets and how to move and combine data between databases.
- Deep SQL experience is a must.
- Exceptional communication skills with the ability to collaborate and translate between technical and non-technical needs.
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high-volume and velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
- Proficient with SQL, Python or a JVM-based language, and Bash.
- Experience with Apache open-source projects such as Spark, Druid, Beam, or Airflow, and big data databases like BigQuery or ClickHouse.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious.
- Experience debugging UI and backend issues is a plus.
About the Role
We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.
Responsibilities
- Develop and maintain full-stack applications.
- Implement clean, maintainable, and efficient code.
- Collaborate with designers, product managers, and backend engineers.
- Participate in code reviews and debugging.
- Work with REST APIs/GraphQL.
- Contribute to CI/CD pipelines.
- Ability to work independently as well as within a collaborative team environment.
Required Technical Skills
- Strong knowledge of JavaScript/TypeScript.
- Experience with React.js, Next.js.
- Backend experience with Node.js, Express, NestJS.
- Understanding of SQL/NoSQL databases.
- Experience with Git, APIs, and debugging tools.
- Cloud familiarity (AWS/GCP/Azure).
AI and System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
Soft Skills
- Strong problem-solving ability.
- Good communication and teamwork.
- Fast learner and adaptable.
Education
Bachelor's degree in Computer Science / Engineering or equivalent.
Hope you are doing great!
We have an urgent opening for a Senior Automation QA professional to join a global life sciences data platform company. Immediate interview slots are available.
🔹 Quick Role Overview
- Role: Senior Automation QA
- Location: Pune (Hybrid; 3 days work from office)
- Employment Type: Full-Time
- Experience Required: 5+ Years
- Interview Process: 2–3 Rounds
- Qualification: B.E / B.Tech
- Notice Period: 0–30 Days
📌 Job Description
IntegriChain is the data and business process platform for life sciences manufacturers, delivering visibility into patient access, affordability, and adherence. The platform enables manufacturers to drive gross-to-net savings, ensure channel integrity, and improve patient outcomes.
We are expanding our Engineering team to strengthen our ability to process large volumes of healthcare and pharmaceutical data at enterprise scale.
The Senior Automation QA will be responsible for ensuring software quality by designing, developing, and maintaining automated test frameworks. This role involves close collaboration with engineering and product teams, ownership of test strategy, mentoring junior QA engineers, and driving best practices to improve product reliability and release efficiency.
🎯 Key Responsibilities
- Hands-on QA across UI, API, and Database testing – both Automation & Manual
- Analyze requirements, user stories, and technical documents to design detailed test cases and test data
- Design, build, execute, and maintain automation scripts using BDD (Gherkin), Pytest, and Playwright (a minimal example follows this list)
- Own and maintain QA artifacts: Test Strategy, BRD, defect metrics, leakage reports, quality dashboards
- Work with stakeholders to review and improve testing approaches using data-backed quality metrics
- Ensure maximum feasible automation coverage in every sprint
- Perform functional, integration, and regression testing in Agile & DevOps environments
- Drive Shift-left testing, identifying defects early and ensuring faster closures
- Contribute to enhancing automation frameworks with minimal guidance
- Lead and mentor a QA team (up to 5 members)
- Support continuous improvement initiatives and institutionalize QA best practices
- Act as a problem-solver and strong team collaborator in a fast-paced environment
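For context on the automation stack named above (Pytest + Playwright), a minimal UI test sketch might look like the following; the URL and assertion are placeholders, not the actual product under test.

```python
# Minimal Pytest + Playwright sketch (sync API). Placeholder URL/assertion.
from playwright.sync_api import sync_playwright

def test_home_page_title():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")         # stand-in for the real app
        assert "Example Domain" in page.title()  # stand-in expectation
        browser.close()
```

In a real suite, the pytest-playwright plugin's `page` fixture and a BDD layer (e.g., Gherkin feature files via pytest-bdd) would typically replace this hand-rolled setup.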
🧩 Desired Skills & Competencies
✅ Must-Have:
- 5+ years of experience in test planning, test case design, test data preparation, automation & manual testing
- 3+ years of strong UI & API automation experience using Playwright with Python
- Solid experience in BDD frameworks (Gherkin, Pytest)
- Strong database testing skills (Postgres / Snowflake / MySQL / RDS)
- Hands-on experience with Git and Jenkins (DevOps exposure)
- Working experience with JMeter
- Experience in Agile methodologies (Scrum / Kanban)
- Excellent problem-solving, analytical, communication, and stakeholder management skills
👍 Good to Have:
- Experience testing AWS / Cloud-hosted applications
- Exposure to ETL processes and BI reporting systems
JOB DESCRIPTION:
Location: Pune, Mumbai
Mode of Work: 3 days from office
Key skills: DSA (collections, hash maps, trees, linked lists, arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, key Java 8 features, database optimization, microservices, and REST APIs
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
Job Title: Java Backend Developer
Experience: 3–8 Years
Location: Pune (Onsite; Pune candidates only)
Notice Period: Immediate to 15 days (or candidates serving notice whose last working day is near)
About the Role :
We are seeking an experienced Java Backend Developer with strong hands-on skills in backend microservices development, API design, cloud platforms, observability, and CI/CD.
The ideal candidate will contribute to building scalable, secure, and reliable applications while working closely with cross-functional teams.
Mandatory Skills: Java 8 / Java 17, Spring Boot 3.x, REST APIs, Hibernate / JPA, MySQL, MongoDB, Prometheus / Grafana / Spring Actuators, AWS, Docker, Jenkins / GitHub Actions, GitHub, Windows 7 / Linux.
Key Responsibilities:
- Design, develop, and maintain backend microservices and REST APIs
- Implement data persistence using relational and NoSQL databases
- Ensure performance, scalability, and security of backend systems
- Integrate observability and monitoring tools for production environments
- Work within CI/CD pipelines and containerized deployments
- Collaborate with DevOps, QA, and product teams for feature delivery
- Troubleshoot, optimize, and improve existing modules and services
Mandatory Skills:
- Languages & Frameworks: Java 8, Java 17, Spring Boot 3.x, REST APIs, Hibernate, JPA
- Databases: MySQL, MongoDB
- Observability: Prometheus, Grafana, Spring Actuators
- Cloud Technologies: AWS
- Containerization Tools: Docker
- CI/CD Tools: Jenkins, GitHub Actions
- Version Control: GitHub
- Operating Systems: Windows 7, Linux
Nice to Have:
- Strong analytical and debugging abilities
- Experience working in Agile/Scrum environments
- Good communication and collaborative skills
Job Title: Java Developer (Full-time)
Location: Pune (Onsite)
Experience Required: 3+ Years
Working Days: 5 Days (Mon to Fri)
Key Responsibilities:
- Design, develop, and deploy scalable microservices using Java and Spring Boot.
- Implement RESTful APIs and integrate with external systems and databases.
- Build and manage services on AWS Cloud using components like ECS, Lambda, S3, RDS, and API Gateway.
- Collaborate with DevOps to integrate CI/CD pipelines for automated builds, tests, and deployments.
- Ensure application performance, reliability, and security in a cloud-native environment.
- Participate in code reviews, troubleshooting, and performance optimization.
- Work closely with architecture, QA, and product teams to deliver high-quality solutions.
Required Skills & Experience:
- Strong proficiency in Java, Spring Boot, and microservice architecture.
- Hands-on experience with AWS Cloud services (ECS, EKS, Lambda, RDS, CloudWatch, etc.).
- Knowledge of Docker, Kubernetes, and CI/CD tools (Jenkins, GitLab, or AWS CodePipeline).
- Experience with REST APIs, JSON, and message brokers (Kafka, RabbitMQ, or SNS/SQS).
- Proficiency in SQL and experience with relational databases (Oracle, MySQL, or PostgreSQL).
- Familiarity with security best practices, monitoring, and logging in the cloud.
What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, patient journey, and supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better predictive models (a toy sketch follows this list).
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
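As a toy illustration of the first bullet (not DeepIntent's actual modeling work), a minimal predictive-model loop in Python might look like this, with synthetic data standing in for claims/clickstream features:

```python
# Toy sketch: fit and score a binary-outcome model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # stand-in features (e.g., exposure, history)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.3f}")  # e.g., a campaign-outcome model's lift
```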
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
- Experience working with data processing, transformation and building model pipelines using tools such as Spark, Airflow, and Docker.
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
- You can write production-level code and work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiar with medical and healthcare data (medical claims, Rx, preferred).
- Conversant with cloud technologies such as AWS or Google Cloud.
We are looking for Senior Software Engineers responsible for designing, developing, and maintaining large-scale distributed ad technology systems. This entails working on several different systems, platforms, and technologies, and collaborating with various engineering teams to meet a range of technological challenges. You will work with our product team to contribute to and influence the roadmap of our products and technologies, and also influence and inspire team members.
Experience
- 3 - 10 Years
Required Skills
- 3+ years of work experience and a degree in computer science or a similar field
- Knowledgeable about computer science fundamentals including data structures, algorithms, and coding
- Enjoy owning projects from creation to completion and wearing multiple hats
- Product focused mindset
- Experience building distributed systems capable of handling large volumes of traffic
- Fluency with Java, Vertex, Redis, and relational databases
- Possess good communication skills
- Enjoy working in a team-oriented environment that values excellence
- Have a knack for solving very challenging problems
- (Preferred) Previous experience in advertising technology or gaming apps
- (Preferred) Hands-on experience with Spark, Kafka or similar open-source software
Responsibilities
- Creating design and architecture documents
- Conducting code reviews
- Collaborate with others in the engineering teams to meet a range of technological challenges
- Build, design, and develop large-scale advertising technology systems capable of handling tens of billions of events daily
Education
- UG - B.Tech/B.E. - Computers; PG - M.Tech - Computer
What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and inclusive work environment.
Salary budget: up to 50 LPA, or a 20% hike on current CTC.
You can message me on LinkedIn for a quick response.
Role & Responsibilities:
As a Full Stack Developer Intern, you will take on significant responsibilities in the design, development, and maintenance of web applications using Next.js, React.js, Node.js, PostgreSQL, and AWS Cloud services. We seek individuals who are self-motivated, energetic, and capable of delivering high-quality work with minimal supervision.
- Develop user-friendly web applications using Next.js and React.js.
- Create and implement RESTful APIs using Node.js.
- Write high-quality, maintainable code while adhering to best practices in software development.
- Deliver projects on time while maintaining a strong focus on performance and user experience.
- Manage data effectively using PostgreSQL databases.
- Code Quality & Reviews: Maintain code quality standards and conduct regular code reviews to ensure the delivery of high-quality, error-free code.
- Performance Optimization: Identify and troubleshoot performance bottlenecks to ensure a seamless and lightning-fast platform experience.
- Bug Fixing & Maintenance: Monitor platform performance and proactively address any issues or bugs to keep the platform running flawlessly.
- Contribute innovative ideas and solutions during team discussions and brainstorming sessions.
- Communicate openly and honestly with team members, sharing insights and feedback constructively.
- Stay updated on emerging technologies and demonstrate a willingness to learn more.
Qualification:
- Graduate/Post-Graduate with a degree in Computer Science, Software Engineering, or a related field.
- Proficiency in HTML, CSS, JavaScript, and modern front-end frameworks (specifically Next.js and React.js).
- Strong knowledge of back-end technologies such as Node.js and Express.js.
- Experience with relational databases, particularly PostgreSQL.
- Familiarity with AWS Cloud services is a plus.
- Excellent problem-solving skills with a proactive approach to challenges.
- Proven ability to troubleshoot and resolve complex technical issues.
- Strong communication skills with the confidence to share ideas openly.
- High energy level and passion for contributing to the company’s success with integrity and honesty.
- Startup Enthusiast: Embrace the fast-paced and dynamic environment of a startup, driven by a passion for making a positive impact.
🚀 RECRUITING BOND HIRING
Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)
⚡ THIS IS NOT A MONITORING ROLE
THIS IS A COMMAND ROLE
You don’t watch dashboards.
You control outcomes.
You don’t react to incidents.
You eliminate them before they escalate.
This role powers an AI-driven SaaS + IoT platform where:
---> Uptime is non-negotiable
---> Latency is hunted
---> Failures are never allowed to repeat
Incidents don’t grow.
Problems don’t hide.
Uptime is enforced.
🧠 WHAT YOU’LL OWN
(Real Work. Real Impact.)
🔍 Total Observability
---> Real-time visibility across cloud, application, database & infrastructure
---> High-signal dashboards (Grafana + cloud-native tools)
---> Performance trends tracked before growth breaks systems
🚨 Smart Alerting (No Noise)
---> Alerts that fire only when action is required
---> Zero false positives. Zero alert fatigue
Right signal → right person → right time
⚙ Automation as a Weapon
---> End-to-end automation of operational tasks
---> Standardized logging, metrics & alerting
---> Systems that scale without human friction
🧯 Incident Command & Reliability
---> First responder for critical incidents (on-call rotation)
---> Root cause analysis across network, app, DB & storage
Fix fast — then harden so it never breaks the same way again
📘 Operational Excellence
---> Battle-tested runbooks
---> Documentation that actually works under pressure
Every incident → a stronger platform
🛠️ TECHNOLOGIES YOU’LL MASTER
☁ Cloud: AWS | Azure | Google Cloud
📊 Monitoring: Grafana | Metrics | Traces | Logs
📡 Alerting: Production-grade alerting systems
🌐 Networking: DNS | Routing | Load Balancers | Security
🗄 Databases: Production systems under real pressure
⚙ DevOps: Automation | Reliability Engineering
🎯 WHO WE’RE LOOKING FOR
Engineers who take uptime personally.
You bring:
---> 3+ years in Cloud Ops / DevOps / SRE
---> Live production SaaS experience
---> Deep AWS / Azure / GCP expertise
---> Strong monitoring & alerting experience
---> Solid networking fundamentals
---> Calm, methodical incident response
---> Bonus (Highly Preferred):
---> B2B SaaS + IoT / hybrid platforms
---> Strong automation mindset
---> Engineers who think in systems, not tickets
💼 JOB DETAILS
📍 Bengaluru
🏢 Hybrid (WFH)
💰 (Final CTC depends on experience & interviews)
🌟 WHY THIS ROLE?
Most cloud teams manage uptime. We weaponize it.
Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.
📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?
Specific Knowledge/Skills
- 4-6 years of experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (see the sketch after this list)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
Job Details
- Job Title: Lead I - Data Engineering
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 6-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
Job Title: Senior Data Engineer (Kafka & AWS)
Responsibilities:
- Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
- Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
- Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
- Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
- Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
- Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
- Uphold data security, governance, and compliance standards across all data operations.
Requirements:
- Minimum of 5 years of experience in Data Engineering or related roles.
- Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
- Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
- Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
- Excellent problem-solving, communication, and collaboration skills.
- Flexibility to write production-quality code in both Python and Java as required.
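To make the Kafka expectations concrete, here is a minimal, illustrative producer/consumer sketch in Python (the kafka-python client, broker address, and topic name are all assumptions; a real MSK or Confluent cluster would also need TLS/SASL configuration):
```python
import json

from kafka import KafkaProducer, KafkaConsumer  # assumes the kafka-python package

# Produce a JSON event to an assumed "orders" topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 42.5})
producer.flush()

# Consume the same topic from the earliest offset and print each record.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.topic, msg.partition, msg.offset, msg.value)
```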
Skills: AWS, Kafka, Python
Must-Haves
Minimum of 5 years of experience in Data Engineering or related roles.
Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
Excellent problem-solving, communication, and collaboration skills.
Flexibility to write production-quality code in both Python and Java as required.
Skills: AWS, Kafka, Python
Notice period - 0 to 15 days only
Role Type: Technical Leadership | Architecture | Client-Facing
Role Overview
We are seeking a Tech Lead – Data Platform who thinks platform-first, not tool-first. This role sits at the intersection of architecture, delivery, and business impact—owning the design of modern data platforms while guiding teams, influencing stakeholders, and shaping scalable, commercially viable solutions.
You will work closely with engineering teams, business leaders, and senior executives to translate data strategy into resilient, cost-effective, and future-ready platforms.
Key Responsibilities
Data Platform Leadership
- Design and lead modern data platforms leveraging lakehouse, streaming, and governance-first architectures
- Drive platform decisions with a focus on scalability, reliability, security, and cost optimization
- Ensure data platforms are built for analytics, operational use cases, and AI readiness
Solution Architecture & Pre-Sales
- Partner with sales and leadership teams during pre-sales, discovery, and solution shaping
- Whiteboard architectures, qualify opportunities, and recommend right-fit platform approaches
- Convert ambiguous business problems into structured technical solutions and delivery plans
Business Value & Commercial Impact
- Translate platform capabilities into clear business outcomes—revenue growth, operational efficiency, risk reduction, and ROI
- Support land-and-expand strategies, helping grow initial engagements into multi-phase programs
- Balance technical ambition with commercial pragmatism
Cloud & Technology Expertise
- Architect solutions across AWS, Azure, or GCP data ecosystems
- Make informed trade-offs around storage, compute, streaming, orchestration, and governance tooling
- Maintain strong cost-awareness and scaling discipline in platform design
Platform & Practice Building
- Create reference architectures, accelerators, and reusable assets to improve delivery velocity
- Contribute to internal best practices, standards, and technical playbooks
- Support the evolution of data platform offerings and service lines
Executive & Stakeholder Engagement
- Act as a credible technology partner to CIOs, CDOs, and CTOs
- Communicate complex technical concepts clearly to non-technical stakeholders
- Operate confidently in regulated environments (financial services, healthcare, etc.)
Technical Leadership & Mentorship
- Stay hands-on enough to review designs, challenge assumptions, and guide implementation
- Mentor engineers and senior developers; influence hiring and upskilling decisions
- Foster a culture of quality, ownership, and continuous learning
GenAI & Emerging Tech
- Understand and position GenAI and AI/ML as outcomes enabled by strong data platforms
- Avoid AI-first hype; ensure foundational data readiness before advanced use cases
Required Skills & Experience
- 5–9 years of experience in data engineering, data platform architecture, or analytics platforms
- Strong understanding of lakehouse, streaming, metadata, governance, and data security concepts
- Hands-on experience with cloud data stacks on AWS
- Experience working with stakeholders across engineering, business, and leadership
- Exposure to client-facing roles, consulting, or solution design is a strong plus
- Ability to balance technical depth with business and commercial thinking
What We’re Looking For
- Platform thinker, not a tool specialist
- Comfortable with ambiguity and ownership
- Strong communicator with executive presence
- Builder mindset with long-term vision
- Pragmatic, outcome-driven, and commercially aware

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
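As a hedged illustration of the scripting this role calls for, the sketch below probes service health endpoints and exits non-zero on any failure so that a Jenkins pipeline stage can fail fast (the service names and URLs are hypothetical):
```python
import sys

import requests

# Hypothetical endpoints; replace with your services' real health-check URLs.
ENDPOINTS = {
    "api": "https://api.example.com/healthz",
    "web": "https://www.example.com/healthz",
}

def check(name: str, url: str) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    print(f"{name}: {'OK' if ok else 'FAILING'} ({url})")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    sys.exit(0 if all(results) else 1)  # non-zero exit fails the Jenkins stage
```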
Skills
GCP, Jenkins, CI/CD, AWS
Nice to Haves
Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
******
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
What You’ll Do:
- Setting up formal data practices for the company.
- Building and running super stable and scalable data architectures.
- Making it easy for folks to add and use new data with self-service pipelines.
- Getting DataOps practices in place.
- Designing, developing, and running data pipelines to help out Products, Analytics, data scientists and machine learning engineers.
- Creating simple, reliable data storage, ingestion, and transformation solutions that are a breeze to deploy and manage.
- Writing and Managing reporting API for different products.
- Implementing different methodologies for different reporting needs.
- Teaming up with all sorts of people – business folks, other software engineers, machine learning engineers, and analysts.
Who You Are:
- Bachelor’s degree in engineering (CS / IT) or equivalent degree from a well-known Institute / University.
- 3.5+ years of experience in building and running data pipelines for tons of data.
- Experience with public clouds like GCP or AWS.
- Experience with Apache open-source projects like Spark, Druid, Airflow, and big data databases like BigQuery, Clickhouse.
- Experience making data architectures that are optimised for both performance and cost.
- Good grasp of software engineering, DataOps, data architecture, Agile, and DevOps.
- Proficient in SQL, Java, Spring Boot, Python, and Bash.
- Good communication skills for working with technical and non-technical people.
- Someone who thinks big, takes chances, innovates, dives deep, gets things done, hires and develops the best, and is always learning and curious.
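As a hedged sketch of what a self-service pipeline can look like, here is a minimal Airflow DAG (the DAG id, schedule, and task bodies are assumptions, not the company's actual pipeline):
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source API")

def transform():
    print("clean and aggregate the events")

def load():
    print("write results to the warehouse, e.g. BigQuery or ClickHouse")

# `schedule` is the Airflow 2.4+ spelling; older versions use `schedule_interval`.
with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```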
About Phi Commerce
Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform that processes digital payments at the doorstep, online, and in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments and going beyond routine payment processing.
Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.
The core team comprises industry veterans with complementary skill sets and nearly 100 years of global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software and Electra Card Services.
Awards & Recognitions:
The company's innovative work has been recognized at prestigious forums within a short span of its existence:
- Certification of Recognition as StartUp by Department of Industrial Policy and Promotion.
- Winner of the "Best Payment Gateway" of the year award at Payments & Cards Awards 2018
- Winner at Payments & Cards Awards 2017 in 3 categories: Best Startup of the Year, Best Online Payment Solution of the Year (Consumer), and Best Online Payment Solution of the Year (Merchant)
- Winner of NPCI IDEATHON on Blockchain in Payments
- Shortlisted by Govt. of Maharashtra as top 100 start-ups pan-India across 8 sectors
About the role:
We are seeking an experienced and dynamic QA Manager to lead our quality assurance team in delivering high-quality software products for our organization. The ideal candidate will have a strong background in manual and automation testing, with hands-on experience in SQL, UNIX commands, STLC/SDLC, and managing QA for critical financial systems. You will be responsible for test strategy creation, resource planning, stakeholder communication, and ensuring process adherence to deliver robust and secure systems.
Key Responsibilities:
Team & Test Management
- Lead and manage a team of manual and automation testers, providing guidance, mentorship, and performance feedback.
- Define and execute test strategies and plans for each product release in alignment with business goals and timelines.
- Oversee test case design, execution, and test data management to ensure full coverage across all functionalities.
- Plan and manage QA deliverables in coordination with release and sprint planning.
Process & Quality Oversight
- Ensure compliance with STLC, SDLC, and Defect Management processes.
- Maintain and manage QA environments, ensuring they are up-to-date and aligned with production-like conditions.
- Implement best practices and continuously improve QA processes for efficiency and quality.
Stakeholder & Communication Management
- Serve as a primary point of contact for all QA-related updates across internal teams and external partners.
- Provide regular DSR (Daily Status Reports) and WSR (Weekly Status Reports) to stakeholders.
- Communicate effectively with both technical and non-technical stakeholders regarding quality issues, risks, and expectations.
Technical Responsibilities
- Work with SQL for data validation and backend testing.
- Use UNIX commands for system checks, log analysis, and troubleshooting.
- Collaborate closely with developers, product managers, and release engineers to ensure high-quality deliverables.
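As an illustrative example of SQL-based data validation in a payments context, the sketch below flags successful transactions that have no settlement record (the schema, table names, and use of SQLite are assumptions made purely to keep the example runnable):
```python
import sqlite3

# Illustrative schema; a real check would run against the production database.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE transactions (txn_id TEXT PRIMARY KEY, amount REAL, status TEXT);
    CREATE TABLE settlements  (txn_id TEXT PRIMARY KEY, settled_amount REAL);
    INSERT INTO transactions VALUES ('T1', 100.0, 'SUCCESS'), ('T2', 250.0, 'SUCCESS');
    INSERT INTO settlements  VALUES ('T1', 100.0);
    """
)

# Successful transactions with no matching settlement row are reconciliation gaps.
unsettled = conn.execute(
    """
    SELECT t.txn_id, t.amount
    FROM transactions t
    LEFT JOIN settlements s ON s.txn_id = t.txn_id
    WHERE t.status = 'SUCCESS' AND s.txn_id IS NULL
    """
).fetchall()

print("Unsettled transactions:", unsettled)  # expect [('T2', 250.0)]
```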
Required Skills & Experience:
Technical Skills:
- Strong hands-on experience with SQL and UNIX/Linux commands.
- Proficient in manual test case creation and automation testing processes.
- Good understanding of QA tools like JIRA, TestRail, Confluence, and defect tracking systems.
- Knowledge of test automation frameworks and scripting languages (optional but a plus).
Domain Expertise:
- Solid understanding of payment systems, including ATM, E-commerce transactions, settlement, and reconciliation workflows.
- Experience in testing APIs, transaction flows, chargebacks, refunds, and financial reporting systems.
Leadership & Soft Skills:
- Proven experience in leading QA teams and managing test resources effectively.
- Strong analytical and problem-solving skills to identify root causes of defects and quality issues.
- Excellent communication and interpersonal skills for effective collaboration across teams and stakeholders.
Qualifications:
- 10+ years of total QA experience, with at least 2 years in a QA leadership/managerial role.
- Experience in fintech, banking, or payment processing environments is strongly preferred.
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge of Systems Management and/or Systems Monitoring Software, Observability platforms, and/or Performance Management Software and Solutions, with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp, and public cloud platforms like Google Cloud and AWS, to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
- Deep experience in public cloud environments using Kubernetes and other distributed managed services like Kafka (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and Neo4j graph.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements (see the sketch below)
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
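A hedged sketch of the SageMaker knowledge expected, invoking an already-deployed endpoint through boto3 (the endpoint name, region, and payload shape are placeholders):
```python
import json

import boto3

# "churn-model-prod" is a hypothetical endpoint; credentials and region come
# from the standard AWS configuration chain.
runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"features": [0.3, 1.7, 5.0]}
response = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```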
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Amazon SageMaker, AWS Cloud Infrastructure (S3, EC2, Lambda), Docker and Kubernetes (EKS, ECS), SQL, AWS data (Redshift, Glue)
Skills: Machine Learning, MLOps, AWS Cloud, Redshift OR Glue, Kubernetes, SageMaker
******
Notice period - 0 to 15 days only
Location : Pune & Hyderabad only
Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Work Location: Pune
Job Type: Hybrid
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
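As a toy illustration of the streaming-style processing this role involves, the sketch below keeps a sliding latency window per device and raises an alert when the average crosses a threshold (the device names, threshold, and samples are invented; a production service would consume from a broker such as Kafka):
```python
from collections import defaultdict, deque
from statistics import mean

class MetricWindow:
    """Track a sliding window of latency samples per device and flag outliers."""

    def __init__(self, size: int = 5, threshold_ms: float = 100.0):
        self.size = size
        self.threshold_ms = threshold_ms
        self.windows = defaultdict(lambda: deque(maxlen=size))

    def ingest(self, device: str, latency_ms: float) -> None:
        w = self.windows[device]
        w.append(latency_ms)
        if len(w) == self.size and mean(w) > self.threshold_ms:
            print(f"ALERT {device}: avg {mean(w):.1f} ms over last {self.size} samples")

agg = MetricWindow()
for sample in [80, 90, 120, 130, 140]:  # synthetic samples for one device
    agg.ingest("array-01", sample)      # alert fires on the fifth sample
```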
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.
Review Criteria
- Strong Senior Data Engineer profile
- 4+ years of hands-on Data Engineering experience
- Must have experience owning end-to-end data architecture and complex pipelines
- Must have advanced SQL capability (complex queries, large datasets, optimization)
- Must have strong Databricks hands-on experience
- Must be able to architect solutions, troubleshoot complex data issues, and work independently
- Must have Power BI integration experience
- The CTC structure is 80% fixed and 20% variable
Preferred
- Worked on Call center data, understand nuances of data generated in call centers
- Experience implementing data governance, quality checks, or lineage frameworks
- Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture
Job Specific Criteria
- CV Attachment is mandatory
- Are you comfortable integrating with Power BI datasets?
- We have alternate Saturdays working. Are you comfortable working from home on the 1st and 4th Saturdays?
Role & Responsibilities
We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.
Key Responsibilities-
- Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
- Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
- Architect and deliver high-performance ETL/ELT processes across cloud platforms.
- Implement and enforce data governance standards, including data quality, lineage, and access control.
- Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
- Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
- Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
- Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
Ideal Candidate
- Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
- Advanced SQL skills with experience handling large, complex datasets.
- Strong expertise with Databricks for data engineering workloads.
- Hands-on experience with major cloud platforms — AWS and Azure.
- Deep understanding of data architecture, data modelling, and optimisation techniques.
- Familiarity with BI and reporting environments such as Power BI.
- Strong analytical and problem-solving abilities with a focus on data quality and governance
- Proficiency in Python or another programming language is a plus.
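To ground the Databricks expectation, here is a minimal, illustrative PySpark sketch that aggregates call-center data and writes a Delta table (the paths, column names, and schema are assumptions):
```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession already exists as `spark`; the builder form
# below is only needed when running the sketch elsewhere.
spark = SparkSession.builder.appName("call-center-etl").getOrCreate()

calls = spark.read.option("header", True).csv("/mnt/raw/call_center/calls.csv")

daily = (
    calls
    .withColumn("call_date", F.to_date("started_at"))
    .groupBy("call_date", "agent_id")
    .agg(
        F.count("*").alias("calls_handled"),
        F.avg("handle_time_sec").alias("avg_handle_time_sec"),
    )
)

# Delta is the default table format on Databricks (Lakehouse architecture).
daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_agent_stats")
```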
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified non-banking financial companies, and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of the 500+ locations where we’re present in India.
Company Overview
McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.
Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era
Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.
Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700 M+ decision-makers, all in just a few clicks.
At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:
- Precision prospecting
- Intent-based targeting
- Data enrichment from 16+ premium sources
- AI-driven workflows to book more meetings, faster
We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.
EXPERIENCE
Duties you'll be entrusted with:
- Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
- Writing efficient, reusable, testable, and scalable code.
- Understanding, analyzing, and implementing – Business needs, feature modification requests, and conversion into software components
- Integration of user-oriented elements into different applications, data storage solutions
- Developing – Backend components to enhance performance and receptiveness, server-side logic, and platform, statistical learning models, highly responsive web applications
- Designing and implementing – High availability and low latency applications, data protection and security features
- Performance tuning and automation of applications and enhancing the functionalities of current software systems.
- Keeping abreast with the latest technology and trends.
Expectations from you:
Basic Requirements
- Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
- Experience with Cloud platforms (AWS, Azure, GCP).
- Strong understanding of monitoring, logging, and observability practices.
- Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
- Expertise in designing, implementing, and optimizing Elasticsearch.
- Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
- Expertise in event-driven architecture.
- Experience in integrating Generative AI APIs.
- Working experience with high user concurrency.
- Experience with scaled databases handling millions of records (indexing, retrieval, etc.)
Technical Skills
- Demonstrable experience in web application development with expertise in Node.js or Nest.js.
- Knowledge of database technologies and agile development methodologies.
- Experience working with databases, such as MySQL or MongoDB.
- Familiarity with web development frameworks, such as Express.js.
- Understanding of microservices architecture and DevOps principles.
- Well-versed with AWS and serverless architecture.
Soft Skills
- A quick and critical thinker with the ability to come up with a number of ideas about a topic and bring fresh and innovative ideas to the table to enhance the visual impact of our content.
- Potential to apply innovative and exciting ideas, concepts, and technologies.
- Stay up-to-date with the latest design trends, animation techniques, and software advancements.
- Multi-tasking and time-management skills, with the ability to prioritize tasks.
THRIVE
Some of the extensive benefits of being part of our team:
- We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
- The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
- The McKinley Cares Program has a wide range of benefits:
- The wellness program covers sessions for mental wellness, and fitness and offers health insurance.
- In-house benefits have a referral bonus window and sponsored social functions.
- An Expanded Leave Basket including paid Maternity and Paternity Leaves and rejuvenation Leaves apart from the regular 20 leaves per annum.
- Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
- In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
- We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.
Role: Senior Backend Engineer (Node.js + TypeScript + Postgres)
Location: Pune
Type: Full-Time
Who We Are:
After a highly successful launch, Azodha is ready to take its next major step. We are seeking a passionate and experienced Senior Backend Engineer to build and enhance a disruptive healthcare product. This is a unique opportunity to get in on the ground floor of a fast-growing startup and play a pivotal role in shaping both the product and the team.
If you are an experienced backend engineer who thrives in an agile startup environment and has a strong technical background, we want to hear from you!
About The Role:
As a Senior Backend Engineer at Azodha, you’ll play a key role in architecting, solutioning, and driving development of our AI-led interoperable digital enablement platform. You will work closely with the founder/CEO to refine the product vision, drive product innovation and delivery, and grow with a strong technical team.
What You’ll Do:
* Technical Excellence: Design, develop, and scale backend services using Node.js and TypeScript, including REST and GraphQL APIs. Ensure systems are scalable, secure, and high-performing.
* Data Management and Integrity: Work with Prisma or TypeORM, and relational databases like PostgreSQL and MySQL
* Continuous Improvement: Stay updated with the latest trends in backend development, incorporating new technologies where appropriate. Drive innovation and efficiency within the team
* Utilize ORMs such as Prisma or TypeORM to interact with database and ensure data integrity.
* Follow Agile sprint methodology for development.
* Conduct code reviews to maintain code quality and adherence to best practices.
* Optimize API performance for optimal user experiences.
* Participate in the entire development lifecycle, from initial planning and design through to maintenance.
* Troubleshoot and debug issues to ensure system stability.
* Collaborate with QA teams to ensure high quality releases.
* Mentor and provide guidance to junior developers, offering technical expertise and constructive feedback.
Requirements
* Bachelor's degree in Computer Science, software Engineering, or a related field.
* 5+ years of hands-on experience in backend development using Node.js and TypeScript.
* Experience working on Postgres or MySQL.
* Proficiency in TypeScript and its application in Node.js
* Experience with ORM such as Prisma or TypeORM.
* Familiarity with Agile development methodologies.
* Strong analytical and problem solving skills.
* Ability to work independently and in a team oriented, fast-paced environment.
* Excellent written and oral communication skills.
* Self motivated and proactive attitude.
Preferred:
* Experience with other backend technologies and languages.
* Familiarity with continuous integration and deployment process.
* Contributions to open-source projects related to backend development.
Note: please don't apply if your primary database is not PostgreSQL.
Join our team of talented engineers and be part of building cutting-edge backend systems that drive our applications. As a Senior Backend Engineer, you'll have the opportunity to shape the future of our backend infrastructure and contribute to the company's success. If you are passionate about backend development and meet the above requirements, we encourage you to apply and become a valued member of our team at Azodha.
Leapfrog is on a mission to be a role model technology company. Since 2010, we have relentlessly worked on crafting better digital products with our team of superior engineers. We’re a full-stack company specializing in SaaS products and have served over 100 clients with our mastery of emerging technologies.
We’re thinkers and doers, creatives and coders, makers and builders— but most importantly, we are trusted partners with world-class engineers. Hundreds of companies in Boston, Seattle, Silicon Valley, and San Francisco choose us to gain speed, agility, quality, and stability, giving them an edge over their competitors.
We are seeking a highly skilled Salesforce Developer to enhance our customer engagement capabilities by upgrading our Legacy Chat to Enhanced Chat. The ideal candidate will have hands-on experience with Salesforce Service Cloud and Sales Cloud, coupled with a strong understanding of Omni-Channel and Live Agent functionalities.
This role requires proven expertise in Apex, Lightning Web Components (LWC), JavaScript, HTML, SOQL, and SOSL, with the ability to design and implement scalable, high-quality Salesforce solutions that drive customer success.
Essential Duties & Responsibilities
- Focus on delivering high-quality, functional solutions on the Salesforce.com platform using Apex, Lightning Web Components (LWC), SOAP, and REST APIs.
- Lead the migration from Legacy Chat to Enhanced Chat, ensuring a seamless transition for users and customers within Service Cloud and Sales Cloud.
- Design and implement Omni-Channel and Omni-Flow configurations to optimize customer service workflows and routing.
- Perform deployment, testing, and documentation of Salesforce features, enhancements, and integrations.
- Collaborate closely with product owners, engineering teams, and business stakeholders to define, clarify, and implement both functional and non-functional requirements for new and existing backlog items.
- Train and support end-users on implemented Salesforce features and planned solutions to ensure adoption and efficiency.
- Investigate, scope, and plan the implementation of assigned epics and backlog items, leveraging deep Salesforce platform expertise to model, document, and justify scalable, maintainable solutions.
Desired Outcomes
- Lead the migration from Legacy Chat to Enhanced Chat within Service Cloud and Sales Cloud, ensuring a seamless, scalable, and user-friendly transition for both customers and internal teams.
- Design, build, and deploy Enhanced Chat configurations, including Omni-Channel and Omni-Flow setups, to optimize response times, routing efficiency, and overall customer engagement.
- Execute deployment, testing, and documentation of Salesforce features, enhancements, and integrations, maintaining high standards of quality, performance, and compliance with best practices.
About you
- Minimum 5 years of hands-on experience with coding on the Salesforce Platform using Apex, Visualforce, Lightning / Aura Components, Javascript, HTML, REST/SOAP API etc.
- Minimum 2 years of hands-on experience creating Flows
- Minimum 2 years of experience with Omnichannel and Live Agent Chat
- Minimum 2 years of experience with Sales, Service Cloud
Required Education / Certificates / Experience
- Bachelor of Science or equivalent preferably in Computer Science / Computer Engineering / Electrical Engineering
- Salesforce Platform Developer I certification
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics (see the sketch after this list).
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
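A small, illustrative example of the log-analysis automation mentioned above, counting ERROR lines per component in a service log (the log path and line format are assumptions):
```python
import re
from collections import Counter

LOG_PATH = "/var/log/app/service.log"  # hypothetical log location
ERROR_RE = re.compile(r"\bERROR\b\s+(?P<component>\w+)")  # assumed line format

def summarize_errors(path: str) -> Counter:
    """Count ERROR lines per component to speed up root-cause triage."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = ERROR_RE.search(line)
            if m:
                counts[m.group("component")] += 1
    return counts

if __name__ == "__main__":
    for component, n in summarize_errors(LOG_PATH).most_common(10):
        print(f"{component}: {n} errors")
```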
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
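A hedged example of the Python + AWS work described above, using boto3 to ship a build artifact to S3 and point a Lambda function at it (the bucket, key, and function names are placeholders):
```python
import boto3

# Credentials and region come from the standard AWS configuration chain.
s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Upload a build artifact, then generate a short-lived download link.
s3.upload_file("dist/app.zip", "deqode-demo-artifacts", "releases/app.zip")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "deqode-demo-artifacts", "Key": "releases/app.zip"},
    ExpiresIn=3600,
)
print("artifact available at:", url)

# Point a Lambda function at the freshly uploaded artifact.
lam.update_function_code(
    FunctionName="app-backend",
    S3Bucket="deqode-demo-artifacts",
    S3Key="releases/app.zip",
)
```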
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
MUST-HAVES:
- Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
- Notice period - 0 to 15 days only
- Hybrid work mode- 3 days office, 2 days at home
SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS
ADDITIONAL GUIDELINES:
- Interview process: 2 technical rounds + 1 client round
- 3 days in office, Hybrid model.
We are looking for an "IoT Migration Architect (Azure to AWS)" – Contract to Hire role.
"IoT Migration Architect (Azure to AWS)" – Role 1
Salary between 28 LPA and 33 LPA (fixed)
We have other positions in IoT as well:
- IoT Solutions Engineer – Role 2
- IoT Architect (8+ yrs) – Role 3
Design end-to-end IoT architecture, define strategy, and integrate hardware/software/cloud components.
Skills: Cloud platforms, AWS IoT, Azure IoT, networking protocols.
Experience in large-scale IoT deployments.
Contract to Hire role.
Location – Pune/Hyderabad/Chennai/ Bangalore
Work Mode – Hybrid, 2-3 days from the office per week.
Duration – Long term, with potential for full-time conversion based on performance and business needs.
Notice period we can consider: 15-25 days (not more than that).
Client Company – One of the leading technology consulting firms.
Payroll Company – One of the leading IT services & staffing companies (with a presence in India, the UK, Europe, Australia, New Zealand, the US, Canada, Singapore, Indonesia, and the Middle East).
Highlights of this role:
• It’s a long-term role.
• High possibility of conversion within or after 6 months (if you perform well).
• Interview: 2 rounds in total (both virtual), but one face-to-face meeting is mandatory at any location – Pune/Hyderabad/Bangalore/Chennai.
Points to remember:
1. You should have valid experience and relieving letters from all your past employers.
2. Must be available to join within 15 days.
3. Must be ready to work 2-3 days from the client office each week.
4. Must have a continuous PF service history for the last 4 years.
What we offer during the role:
- Competitive Salary
- Flexible working hours and hybrid work mode.
- Potential for full-time conversion, including comprehensive benefits: PF, gratuity, paid leave, paid holidays (as per client), health insurance, and Form 16.
How to apply:
- Please fill in the summary sheet given below.
- Please provide your UAN service history.
- Latest photo.
IoT Migration Architect (Azure to AWS) - Job Description
Job Title: IoT Migration Architect (Azure to AWS)
Experience Range: 10+ Years
Role Summary
The IoT Migration Architect is a senior-level technical expert responsible for providing architecture leadership, design, and hands-on execution for migrating complex Internet of Things (IoT) applications and platforms from Microsoft Azure to Amazon Web Services (AWS). This role requires deep expertise in both Azure IoT and the entire AWS IoT ecosystem, ensuring a seamless, secure, scalable, and cost-optimized transition with minimal business disruption.
Required Technical Skills & Qualifications
10+ years of progressive experience in IT architecture, with a minimum of 4+ years focused on IoT Solution Architecture and Cloud Migrations.
Deep, hands-on expertise in the AWS IoT ecosystem, including design, implementation, and operations (AWS IoT Core, Greengrass, Device Management, etc.).
Strong, hands-on experience with Azure IoT services, specifically Azure IoT Hub, IoT Edge, and related data/compute services (e.g., Azure Stream Analytics, Azure Functions).
Proven experience in cloud-to-cloud migration projects, specifically moving enterprise-grade applications and data, with a focus on the unique challenges of IoT device and data plane migration.
Proficiency with IoT protocols such as MQTT, AMQP, HTTPS, and securing device communication (X.509).
Expertise in Cloud-Native Architecture principles, microservices, containerization (Docker/Kubernetes/EKS), and Serverless technologies (AWS Lambda).
Solid experience with CI/CD pipelines and DevOps practices in a cloud environment (e.g., Jenkins, AWS Code Pipeline, GitHub Actions).
Strong knowledge of database technologies, both relational (e.g., RDS) and NoSQL (e.g., DynamoDB).
Certifications Preferred: AWS Certified Solutions Architect (Professional level highly desired), or other relevant AWS/Azure certifications.
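To illustrate the MQTT and X.509 requirements above, here is a minimal, hedged sketch that publishes a sensor reading to an AWS IoT Core endpoint with the paho-mqtt client (the endpoint, certificate paths, client id, and topic are placeholders):
```python
import json
import ssl

import paho.mqtt.client as mqtt

# Placeholder AWS IoT Core endpoint; device certs implement X.509 mutual TLS.
ENDPOINT = "abc123-ats.iot.ap-south-1.amazonaws.com"

client = mqtt.Client(client_id="pump-sensor-01")  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device.pem.crt",
    keyfile="private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect(ENDPOINT, port=8883)
client.loop_start()

payload = {"device_id": "pump-sensor-01", "pressure_kpa": 101.3}
client.publish("factory/sensors/pressure", json.dumps(payload), qos=1)

client.loop_stop()
client.disconnect()
```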
Your full name –
Contact NO –
Alternate Contact No-
Email ID –
Alternate Email ID-
Total Experience –
Experience in IoT –
Experience in AWS IoT-
Experience in Azure IoT –
Experience in Kubernetes –
Experience in Docker –
Experience in EKS-
Do you have valid passport –
Current CTC –
Expected CTC –
What is your notice period in your current Company-
Are you currently working or not-
If not working then when you have left your last company –
Current location –
Preferred Location –
It’s a Contract to Hire role, Are you ok with that-
Highest Qualification –
Current Employer (Payroll Company Name)
Previous Employer (Payroll Company Name)-
2nd Previous Employer (Payroll Company Name) –
3rd Previous Employer (Payroll Company Name)-
Are you holding any offer? –
Are you expecting any offer? –
Are you open to considering a contract-to-hire (C2H) role? –
Is PF deduction happening in your current company? –
Did PF deduction happen with your 2nd-last employer? –
Did PF deduction happen with your 3rd-last employer? –
Latest Photo –
UAN Service History -
Shantpriya Chandra
Director & Head of Recruitment.
Harel Consulting India Pvt Ltd
https://www.linkedin.com/in/shantpriya/
www.harel-consulting.com
🚀 We’re Hiring: React + Node.js Developer (Full Stack)
📍 Location: Pune
💼 Experience: 5–8 years
🕒 Notice Period: Immediate to 15 days
About the Role:
We’re looking for a skilled Full Stack Developer with hands-on experience in React and Node.js, and a passion for building scalable, high-performance applications.
Key Skills & Responsibilities:
Strong expertise in React (frontend) and Node.js (backend).
Experience with relational databases (PostgreSQL / MySQL).
Familiarity with production systems and cloud services (AWS / GCP).
Strong grasp of OOP / FP and clean coding principles (e.g., SOLID).
Hands-on experience with Docker; exposure to Kubernetes, RabbitMQ, and Redis is a plus.
Experience or interest in AI APIs & tools is a plus.
Excellent communication and collaboration skills.
Bonus: Contributions to open-source projects.
Role: DevOps Engineer
Experience: 2–3+ years
Location: Pune
Work Mode: Hybrid (3 days work from office)
Mandatory Skills:
- Strong hands-on experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline
- Proficiency in scripting languages (Bash, Python, PowerShell)
- Hands-on experience with containerization (Docker) and container management
- Proven experience managing infrastructure (On-premise or AWS/VMware)
- Experience with version control systems (Git/Bitbucket/GitHub)
- Familiarity with monitoring and logging tools for system performance tracking (see the scripting sketch after this list)
- Knowledge of security best practices and compliance standards
- Bachelor's degree in Computer Science, Engineering, or related field
- Willingness to support production issues during odd hours when required
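As a hedged illustration of the scripting and monitoring work this role involves, here is a stdlib-only Python health-check sketch; the probed URL and the disk-usage threshold are assumptions, not a prescribed setup:

```python
# Minimal ops-scripting sketch: log disk usage and probe a service health URL.
import logging
import shutil
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def check_disk(path: str = "/", threshold: float = 0.9) -> None:
    usage = shutil.disk_usage(path)
    used_frac = usage.used / usage.total
    level = logging.WARNING if used_frac > threshold else logging.INFO
    logging.log(level, "disk %s at %.0f%% used", path, used_frac * 100)

def check_http(url: str = "http://localhost:8080/health") -> None:  # placeholder URL
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            logging.info("%s returned HTTP %s", url, resp.status)
    except Exception as exc:
        logging.error("%s unreachable: %s", url, exc)

if __name__ == "__main__":
    check_disk()
    check_http()
```

In practice a script like this would feed whatever monitoring/alerting stack the team runs rather than plain log lines.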
Preferred Qualifications:
- Certifications in AWS, Docker, or VMware
- Experience with configuration management tools like Ansible
- Exposure to Agile and DevOps methodologies
- Hands-on experience with Virtual Machines and Container orchestration
Job Details
- Job Title: ML Engineer II - AWS, AWS Cloud
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 6-12 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description:
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Work with algorithms and architectures ranging from traditional statistical methods to deep learning, including the use of LLMs in modern frameworks
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models
- System Integration: Integrate models into existing systems and workflows
- Model Deployment: Deploy models to production environments and monitor performance
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders
- Continuous Improvement: Identify areas for improvement in model performance and systems
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD)
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering; proficiency in SQL for data manipulation, plus Kafka and ChaosSearch logs for troubleshooting; other tech touch points include ScyllaDB (similar to BigTable), OpenSearch, and the Neo4j graph database
- Model Deployment and Monitoring: MLOps experience deploying ML models to production environments; knowledge of model monitoring and performance evaluation
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements (see the sketch after this list)
- AWS Cloud Infrastructure: Familiarity with S3, EC2, and Lambda, and with using these services in ML workflows
- AWS data services: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
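For illustration only, here is a minimal sketch of training and deploying a model with the SageMaker Python SDK. The container image URI, IAM role ARN, and S3 paths are placeholders, and a production pipeline would typically be expressed as SageMaker Pipelines steps rather than a bare Estimator:

```python
# Minimal sketch: train a model on SageMaker and deploy it to a real-time endpoint.
# Image URI, role ARN, and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = Estimator(
    image_uri="<training-image-uri>",                 # placeholder training container
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://your-bucket/model-artifacts/",  # placeholder bucket
    sagemaker_session=session,
)

estimator.fit({"train": "s3://your-bucket/train/"})   # launch the training job

# Deploy the trained model behind a real-time inference endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```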
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
AWS, AWS Cloud, Amazon Redshift, EKS
Notice Period: Immediate – 30 Days
MUST-HAVES:
- LLM Integration & Prompt Engineering
- Context & Knowledge Base Design
- Experience running LLM evals
NOTICE PERIOD: Immediate – 30 Days
SKILLS: LLM, AI, PROMPT ENGINEERING
NICE TO HAVES:
- Data Literacy & Modelling Awareness
- Familiarity with Databricks, AWS, and ChatGPT environments
ROLE PROFICIENCY:
Role Scope / Deliverables:
- Scope of role: Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
- Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
- Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
- Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
- Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
- Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.
KEY DELIVERABLES:
- LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic (see the sketch after this list).
- Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
- Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
- Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
- AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
- Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
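As a hedged sketch of what context injection, prompt templates, and retrieval logic can look like in practice, the snippet below uses a plain Python template; retrieve_context() is a hypothetical stand-in for the project's actual retrieval layer, and the returned snippet is an invented example:

```python
# Minimal sketch of context injection via a prompt template.
# retrieve_context() is a hypothetical placeholder for the real retrieval layer.
PROMPT_TEMPLATE = """You are a business-intelligence assistant.
Answer only from the context below; reply "unknown" if the context is insufficient.

Context:
{context}

Question:
{question}
"""

def retrieve_context(question: str) -> list[str]:
    # Placeholder: a real implementation would query the curated knowledge base
    # (tagged definitions, business rules) for snippets relevant to the question.
    return ['Definition: an "active customer" has purchased in the last 12 months.']

def build_prompt(question: str) -> str:
    context = "\n---\n".join(retrieve_context(question))
    return PROMPT_TEMPLATE.format(context=context, question=question)

print(build_prompt("How do we define an active customer?"))
```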
KEY SKILLS:
Technical & Analytical Skills:
- LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
- Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
- Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases (see the eval sketch after this list).
- Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
- Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
- Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:
- Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
- Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
- Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
- Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
- Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.
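Finally, as a hedged sketch of running LLM evals with defined scoring criteria: the eval set below is invented, ask_model() is a hypothetical stand-in for the actual model call, and real evals would use richer graders than substring matching:

```python
# Minimal LLM-eval sketch: score model responses against expected answers.
# EVAL_SET entries and the ask_model callable are illustrative placeholders.
EVAL_SET = [
    {"question": "Which region led Q3 revenue?", "expected": "EMEA"},
    {"question": "Which product line grew fastest?", "expected": "Analytics"},
]

def score(response: str, expected: str) -> float:
    # Simple substring-match criterion; real evals typically use rubric graders.
    return 1.0 if expected.lower() in response.lower() else 0.0

def run_evals(ask_model) -> float:
    results = [score(ask_model(c["question"]), c["expected"]) for c in EVAL_SET]
    return sum(results) / len(results)  # mean accuracy over the eval set
```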