50+ AWS (Amazon Web Services) Jobs in India
Job Title: Senior Full-stack Developer (Python, React)
Location: Hyderabad, India (On-site Only)
Employment Type: Full-Time
Work Mode: Office-Based; Remote or Hybrid Not Allowed
Role Summary
We are looking for a skilled Senior Full-stack Developer with expertise in Django (Python), React, RESTful APIs, GraphQL, microservices architecture, Redis, and AWS services (SNS, SQS, etc.). The ideal candidate will be responsible for designing, developing, and maintaining scalable backend systems and APIs to support dynamic frontend applications and services.
Required Skillset:
- 9+ years of professional experience writing production-grade software, including experience leading the design of complex systems.
- Strong expertise in Python (Django or equivalent frameworks) and REST API development.
- Solid experience with frontend frameworks such as React and TypeScript.
- Strong understanding of relational databases (MySQL or PostgreSQL preferred).
- Experience with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Hands-on experience with cloud infrastructure (AWS preferred).
- Proven experience debugging complex production issues and improving observability.
Preferred Skillset:
- Experience in enterprise SaaS or B2B systems with multi-tenancy, authentication (OAuth, SSO, SAML), and data partitioning; exposure to Kafka or RabbitMQ and microservices.
- Knowledge of event-driven architecture, A/B testing frameworks, and analytics pipelines.
- Familiarity with accessibility standards, best practices, and Agile/Scrum methodologies.
- Exposure to the Open edX ecosystem or open-source contributions in education tech.
- Demonstrated history of technical mentorship, team leadership, or cross-team collaboration.
Tech Stack:
- Backend: Python (Django), Celery and Redis for asynchronous workflows, REST APIs
- Frontend: React, TypeScript, SCSS
- Data: MySQL, Snowflake, Elasticsearch
- DevOps/Cloud: Docker, Kubernetes, GitHub Actions, AWS
- Monitoring: Datadog
- Collaboration Tools: GitHub, Jira, Slack, Segment
Primary Responsibilities:
- Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.
- Architect, design, and deliver secure, scalable, and high-performing web applications.
- Manage the complete software development lifecycle including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.
- Ensure compliance with coding standards, architectural patterns, and established development best practices.
- Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.
- Perform detailed code reviews, optimize system performance, and resolve production-level issues.
- Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.
- Create and maintain technical documentation while providing regular updates to leadership and stakeholders.

A real-time Customer Data Platform and cross-channel marketing automation suite that delivers superior experiences and increased revenue for some of the largest enterprises in the world.
Key Responsibilities:
- Design and develop backend components and sub-systems for large-scale platforms under guidance from senior engineers.
- Contribute to building and evolving the next-generation customer data platform.
- Write clean, efficient, and well-tested code with a focus on scalability and performance.
- Explore and experiment with modern technologies, especially open-source frameworks, and build small prototypes or proofs of concept.
- Use AI-assisted development tools to accelerate coding, testing, debugging, and learning while adhering to engineering best practices.
- Participate in code reviews, design discussions, and continuous improvement of the platform.
Qualifications:
- 0–2 years of experience (or strong academic/project background) in backend development with Java.
- Good fundamentals in algorithms, data structures, and basic performance optimizations.
- Bachelor’s or Master’s degree in Computer Science or IT (B.E / B.Tech / M.Tech / M.S) from premier institutes.
Technical Skill Set:
- Strong aptitude and analytical skills with emphasis on problem solving and clean coding.
- Working knowledge of SQL and NoSQL databases.
- Familiarity with unit testing frameworks and writing testable code is a plus.
- Basic understanding of distributed systems, messaging, or streaming platforms is a bonus.
AI-Assisted Engineering (LLM-Era Skills):
- Familiarity with modern AI coding tools such as Cursor, Claude Code, Codex, Windsurf, Opencode, or similar.
- Ability to use AI tools for code generation, refactoring, test creation, and learning new systems responsibly.
- Willingness to learn how to combine human judgment with AI assistance for high-quality engineering outcomes.
Soft Skills & Nice to Have
- Appreciation for technology and its ability to create real business value, especially in data and marketing platforms.
- Clear written and verbal communication skills.
- Strong ownership mindset and ability to execute in fast-paced environments.
- Prior internship or startup experience is a plus.
Job Summary
We are looking for a highly skilled Senior Java/Kotlin Developer with strong experience in Microservices Architecture and AWS Cloud. The ideal candidate should have hands-on expertise in designing, developing, and deploying scalable microservices-based applications using Java/Kotlin and AWS services.
Key Responsibilities
- Design and develop scalable, secure, and high-performance microservices using Java and/or Kotlin
- Build RESTful APIs using frameworks like Spring Boot / Spring Cloud
- Develop and deploy cloud-native applications on AWS
- Implement containerized applications using Docker and orchestrate using Kubernetes / EKS
- Work with messaging systems like Kafka / SQS
- Implement CI/CD pipelines using tools like Jenkins / GitHub Actions
- Ensure best practices in system design, code quality, testing, and security
- Collaborate with cross-functional teams (DevOps, QA, Product)
- Participate in code reviews and mentor junior developers
Required Skills
- 5+ years of strong experience in Java development
- Hands-on experience in Kotlin
- Strong knowledge of Microservices Architecture
- Experience with Spring Boot, Spring MVC, Spring Security
- Strong experience in AWS services such as EC2, S3, RDS, Lambda, ECS/EKS, API Gateway, and SQS/SNS
- Experience with Docker & Kubernetes
- Strong understanding of REST APIs and distributed systems
- Experience with relational databases (MySQL/PostgreSQL) and NoSQL (MongoDB/DynamoDB)
- Good understanding of design patterns and clean architecture
- Experience in Agile/Scrum methodology
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, and CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good understanding of networking (VNet, subnets, Private Link, VNet peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability building and supporting high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days per week from office
Build AI Systems That Change How Industries Operate
Tailored AI is not just another tech company. We’re building the McKinsey of AI systems: a new kind of firm, made up of engineers who understand business deeply and use AI as a force multiplier.
As an SDE 2, you’ll lead and own the engineering for an entire product track, often working directly with clients and stakeholders. You’ll be the architect, the executor, and the problem-solver-in-chief. You’ll take vague problem statements, turn them into elegant solutions, and bring them to life in production.
What You’ll Do
- Architect and build AI-powered software solutions from scratch
- Own a full engineering track—backend, infra, integrations, and LLM workflows
- Interface with customers to align on specs, iterate fast, and deploy with confidence
- Mentor SDE 1s and Interns, conduct code reviews, and guide engineering quality
- Stay on top of AI trends, contribute to internal tooling and shared best practices
What You’ll Gain
- Leadership opportunities and fast progression to Senior SDE roles
- Deep knowledge of how AI is transforming industries while actually building it
- High ownership, zero bureaucracy, and direct influence on product direction
- Exposure to multi-agent AI systems, enterprise integrations, and scalable infra
Who You Are
- 2–3 years of strong backend engineering experience
- Proven track record of owning software modules and delivering in production
- Skilled in Python, Django/FastAPI, Postgres, AWS
- Exposure to system design and performance optimization
- Interest in AI tools like LangChain, OpenAI, vector DBs, etc.
- Strong analytical and communication skills
Tech Stack You’ll Work With
- Python, Django, FastAPI
- Postgres, Redis, S3
- EC2, Lambda, CloudWatch
- LangChain, LLM APIs, vector DBs
- REST APIs, Microservices, GitHub Actions
Some Real Problems You Might Work On
- Building a multi-agent career coaching assistant that guides users and automates job hunting
- Deploying a chatbot that generates employee performance reviews on-demand from HR data
- Designing an LLM pipeline to help Indian lawyers access precedents, statutes, and case law in seconds
Interview Process
- Screening – A quick call with a Co-Founder to align on fit
- CV + Puzzle + Programming – 1 hour round to gauge problem-solving and fundamentals
- Live Coding – Solve a coding task using Python + docs
- System Design – For SDE 2, a take-home problem and a detailed discussion round
Job Title: Java Full Stack Developer
Location: Bangalore
Experience: 3–9 Years
Employment Type: Full-Time
Role Overview
We are looking for an experienced Java Full Stack Developer with strong backend expertise in Java and frontend experience in modern UI frameworks. The ideal candidate should be capable of designing scalable backend services and developing responsive user interfaces.
Key Responsibilities
- Develop and maintain scalable applications using Java (8/11/17)
- Build RESTful APIs using Spring Boot / Spring MVC
- Design and develop frontend applications using Angular / React / Vue.js
- Work with relational databases like MySQL / PostgreSQL / Oracle
- Implement Microservices architecture
- Integrate third-party APIs and external systems
- Write unit and integration test cases
- Participate in code reviews and Agile ceremonies
- Ensure application performance, security, and scalability
- Work with CI/CD pipelines for deployment
Required Skills
Backend:
- Strong knowledge of Core Java
- Spring Boot, Spring MVC, Spring Security
- REST API development
- Microservices architecture
- Hibernate / JPA
- SQL & Database design
Frontend:
- Angular (8+) or React.js
- HTML5, CSS3, JavaScript, TypeScript
- Bootstrap / Material UI
DevOps & Tools:
- Git
- Maven / Gradle
- Jenkins / Azure DevOps
- Docker (Good to have)
- Kubernetes (Preferred)
Preferred Skills
- Cloud experience (AWS / Azure / GCP)
- Kafka / Messaging systems
- Redis / Caching mechanisms
- Agile/Scrum methodology
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using Azure DevOps
- Manage cloud infrastructure on Microsoft Azure including VMs, App Services, AKS, Networking, and Storage
- Implement Infrastructure as Code (IaC) using Terraform, ARM Templates, or Bicep
- Build and manage containerized environments using Docker and Kubernetes
- Deploy and manage Azure Kubernetes Service (AKS) clusters
- Automate configuration management and deployments
- Implement monitoring and logging solutions using Azure Monitor, Log Analytics, and Application Insights
- Integrate security best practices (DevSecOps) within CI/CD pipelines
- Collaborate with development teams to improve build, release, and deployment processes
- Troubleshoot production issues and optimize system performance
- Ensure high availability, scalability, and disaster recovery strategies
Required Skills & Qualifications
- 7+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with Microsoft Azure
- Expertise in CI/CD implementation using Azure DevOps
- Experience with scripting languages such as PowerShell, Bash, or Python
- Proficiency in Infrastructure as Code (Terraform, ARM, Bicep)
- Experience with container orchestration (Kubernetes/AKS)
- Knowledge of Git-based version control systems
- Experience with configuration management tools
- Strong understanding of networking, security, and cloud architecture
- Experience working in Agile/Scrum environments
Job Title: Java Backend Developer
Experience: ~3-6 years (Mid-to-Senior)
Employment Type: Full-time, Permanent
Location : Bangalore
Role Overview
As a Java Backend Developer, you’ll be responsible for designing, developing, and maintaining scalable backend systems and microservices. You’ll work with cross-functional teams to build high-performance distributed services, APIs, and data-driven applications that power business solutions.
Key Responsibilities
- Design and implement microservices and backend components using Java (8+) and Spring Boot.
- Build and consume RESTful APIs and integrate with internal/external services.
- Work with event-driven systems and messaging using Apache Kafka (producers/consumers).
- Develop and optimize databases, including SQL (e.g., MySQL/PostgreSQL) and NoSQL (e.g., MongoDB/Cassandra).
- Participate in CI/CD pipelines, automated builds, and deployments using tools like Git, Maven, Jenkins.
- Ensure code quality through unit and integration testing, documentation, and code reviews.
- Collaborate with frontend developers, QA, DevOps, and product teams following Agile methodologies.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- Proven hands-on experience with Core Java and Spring Boot development.
- Strong understanding of microservices architecture, REST APIs, and distributed systems.
- Experience with message queues/event streaming (Apache Kafka).
- Skilled in relational and NoSQL databases and writing optimized queries.
- Comfortable with CI/CD tools (e.g., Git, Maven, Jenkins) and version control.
- Good problem-solving, debugging, and collaboration skills.
Preferred / Nice-to-Have
- Cloud platform experience (AWS / Azure / GCP).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of performance tuning, caching strategies, observability (metrics/logging).
- Agile/Scrum development experience.
8+ years in the industry and 5 years in Node.js.
Proficient in Node.js along with frameworks like Express.js, Sails.js, Hapi.js, etc.
Proficient in any RDBMS such as MySQL, MariaDB, or PostgreSQL.
Proficient in any NoSQL database such as MongoDB or DynamoDB.
Proficient in writing RESTful APIs.
Proficient in JS with ESNext standards.
Proficient in version control systems based on Git; must know branching strategies.
Working knowledge of cloud platforms such as AWS or GCP.
Experience with containerization using Docker.
Hands-on exposure to Kubernetes for container orchestration (deployment, scaling, health checks).
Understanding of CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, or similar.
Experience with Infrastructure as Code tools like Terraform or CloudFormation (good to have)
Role Context & Importance
At ARDEM, uninterrupted connectivity is mission-critical. Our remote teams across India process millions of pages and records annually using ARDEM Cloud Platforms, AWS WorkSpaces, and enterprise tools. The Network Engineer plays a central role in ensuring zero downtime for our processing operations, maintaining secure remote access for hundreds of team members, and supporting the cloud and on-premises infrastructure that underpins every client engagement.
Key Responsibilities
1. Remote Desktop & End-User Support
• Provide prompt remote desktop support to ARDEM’s distributed workforce, resolving hardware, software, and network-related issues via AnyDesk and other remote tools.
• Diagnose and resolve connectivity issues affecting access to ARDEM Cloud Platforms and AWS WorkSpaces.
• Support onboarding and configuration of workstations (Windows 14” FHD laptops, minimum i5/8GB RAM) per ARDEM standard specifications.
• Ensure minimum 100 Mbps internet connectivity compliance for remote staff and assist with ISP-related escalations.
2. Identity & Access Management
• Manage and maintain user identities, access policies, and lifecycle operations using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365 Admin Center.
• Configure role-based access controls (RBAC), group policies, and conditional access to protect client data in line with SOC 2 and ISO 27001 requirements.
• Manage Microsoft 365 services including Exchange Online, Teams, SharePoint, and OneDrive for ARDEM’s internal and remote teams.
3. AWS Cloud Services Administration
• Configure, monitor, and support AWS services critical to ARDEM’s cloud operations: EC2, S3, IAM, AWS WorkSpaces, and VPC.
• Manage AWS IAM policies, user roles, and security groups to ensure least-privilege access across cloud environments.
• Monitor cloud resource utilisation, performance metrics, and costs; generate reports and recommend optimisations.
• Support cloud-based remote desktop (AWS WorkSpaces) used by ARDEM’s BPO processing teams.
4. Network Infrastructure & Cisco Hardware
• Configure, manage, and troubleshoot Cisco switches, routers, and firewalls at ARDEM’s processing centres.
• Manage DNS, DHCP, VPN, and VLAN configurations to support secure and high-availability operations.
• Monitor network performance and bandwidth; implement QoS policies to prioritise critical BPO workloads.
• Coordinate with ISPs and hardware vendors to resolve infrastructure issues with minimal service disruption.
5. On-Premises Server Administration
• Maintain Windows Server infrastructure including file servers, application hosting servers, and internal email servers.
• Administer DNS, DHCP, Group Policy, and Active Directory Domain Services (AD DS) across on-premises environments.
• Perform routine health checks, patch management, and capacity planning for on-prem systems.
6. Security, Backup & Disaster Recovery
• Implement and maintain data backup schedules and disaster recovery (DR) procedures in line with ARDEM’s data security policies.
• Support compliance with ARDEM’s ISO 27001-aligned, SOC 2, HIPAA, and GDPR security frameworks through network-level controls.
• Manage VPNs, SSL certificates, endpoint security tools, and encryption at rest/in-transit for all ARDEM platforms.
• Respond to and document security incidents; participate in periodic security audits and remediation activities.
7. Documentation & Knowledge Management
• Create and maintain clear, accurate technical documentation: network diagrams, SOPs, runbooks, and incident logs.
• Build and update the internal IT knowledge base to enable faster issue resolution and reduce repeat incidents.
• Document all changes to infrastructure, cloud configurations, and access policies in accordance with change management protocols.
8. Collaboration & Project Support
• Work closely with ARDEM’s Project Managers, Operations teams, and client-facing staff to resolve IT dependencies impacting BPO delivery.
• Assist with IT infrastructure upgrades, cloud migrations, and automation initiatives that support ARDEM’s growth.
• Participate in rotational shifts to ensure 24/7 coverage aligned with ARDEM’s three-shift processing operations.
Qualifications & Requirements
Education
• B.Tech in Information Technology
Experience
• 3–5 years of professional experience in network support, IT infrastructure management, or cloud administration.
• Proven track record supporting remote or distributed teams in a BPO, IT services, or technology company environment.
Technical Skills – Required
• AWS Cloud Services: EC2, S3, IAM, VPC, AWS WorkSpaces – hands-on configuration and monitoring.
• Azure Entra ID (Azure AD), Active Directory, Group Policy, and Microsoft 365 administration.
• Windows Server administration: AD DS, DNS, DHCP, File Services, patch management.
• Cisco networking hardware: switches, routers, firewalls – configuration and troubleshooting.
• VPN, VLAN, SSL, and remote access technologies (AnyDesk, RDP, VPN clients).
• Network monitoring tools and log analysis for proactive issue detection.
• Backup and disaster recovery tools and procedures.
Technical Skills – Preferred
• Experience with ARDEM-type BPO cloud platforms or similar multi-tenant cloud environments.
• Familiarity with security frameworks: ISO 27001, SOC 2, HIPAA, GDPR.
• Exposure to automation scripting (PowerShell, Python) for IT operations tasks.
Certifications (Preferred)
AWS Cloud Practitioner (CLF-C02)
CCNA (Cisco Certified Network Associate)
AWS SysOps Administrator
MCSE / Windows Server
Azure Fundamentals (AZ-900)
ITIL v4 Foundation
Microsoft 365
Soft Skills
• Strong analytical and systematic troubleshooting skills with a solution-first mindset.
• Excellent written and verbal communication in English; ability to explain technical issues to non-technical stakeholders.
• Ability to work independently and collaboratively in a fully remote, distributed team environment.
• High sense of accountability, punctuality, and commitment to SLAs critical to BPO operations.
• Willingness to work rotational shifts to support ARDEM’s round-the-clock processing operations.
• Responsible for assisting the technology and production teams with client deliverables and receipt.
Mandatory Work-from-Home Equipment Requirements
All candidates must confirm that they meet the following minimum home office specifications before selection:
Device Type: Windows Laptop
Operating System: Windows 10 / Windows 11
Screen Size: 14 inches (2 monitors preferred)
Screen Resolution: FHD (1920 × 1080) or higher
Processor: Intel Core i5 (8th Gen or later) or higher
RAM: Minimum 8 GB (mandatory); 16 GB preferred
Internet Speed: 100 Mbps or higher (dedicated broadband connection)
Remote Tool: AnyDesk (to be installed and configured prior to joining)
Power Backup: UPS / Inverter recommended for uninterrupted connectivity
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.
The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform security reviews, vulnerability assessments, and penetration testing for Web, Android, iOS, and API endpoints
● Perform threat modelling, anticipate potential attack vectors, and improve security architecture on complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes.
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders
JOB DESCRIPTION:
Location: Pune, Mumbai
Mode of Work : 3 days from Office
Required skills: DSA (collections, hash maps, trees, linked lists, arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, Java 8 key features, database optimization, microservices, and REST APIs.
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
Job Title: AWS Alliance Manager
Location: Hyderabad (On-Site)
About Us:
Oraczen builds production-grade Agentic AI software that helps enterprises buy, make, and sell better. Our platform connects enterprise systems, understands operational data, and converts it into decision intelligence and action.
We deliver two primary product lines:
Scorpio — Supply chain & procurement decision intelligence
Auron — Customer engagement and revenue intelligence using voice-based AI agents
We work with mid-market and enterprise customers across financial services, manufacturing, retail, and life sciences. Our growth strategy relies heavily on hyperscaler ecosystems — especially AWS — as a primary go-to-market channel.
Role Overview: We are seeking an experienced AWS Alliance Manager with 5–10 years of experience to build and operationalize Oraczen’s partnership with Amazon Web Services (AWS) as a scalable revenue channel.
In this role, you will own the AWS relationship across Partner Development Managers, Solution Architects, and field sellers, and convert the partnership into sourced pipeline and closed deals. You will operate at the intersection of sales, partnerships, and strategy — turning AWS from a technology platform into a predictable distribution channel.
Your ability to influence without authority, align with AWS account plans, and create co-sell opportunities will directly impact company growth.
Key Responsibilities: AWS Relationship Development:
Act as the primary point of contact between Oraczen and AWS India. Build strong working relationships with:
• AWS Partner Development Managers (PDMs)
• AWS Solution Architects
• AWS Industry Account Executives
• ISV & Marketplace teams
Position Oraczen internally within AWS as a credible enterprise ISV partner.
Pipeline Generation:
Identify accounts where Oraczen aligns with AWS account plans. Source and qualify opportunities through AWS sellers.
Drive introductions, meetings, and joint customer engagements.
Enable AWS teams to confidently position Oraczen solutions.
Measure success based on opportunities created and pipeline generated.
Co-Sell Execution:
Register and manage opportunities in AWS ACE.
Coordinate joint customer calls, workshops, and deal cycles.
Maintain account mapping and track partner-sourced pipeline.
Build repeatable co-sell playbooks and engagement processes.
Partner Enablement:
Educate AWS teams on Oraczen use cases:
• Supply chain intelligence
• Procurement & spend analytics
• Sales and service intelligence
Develop clear messaging for AWS field teams.
Conduct enablement sessions and partner workshops.
Marketplace & Programs:
Drive customer procurement through AWS Marketplace. Manage AWS programs and incentives including:
• ISV Accelerate
• Co-sell programs
• Funded POCs
• Industry initiatives
Support enterprise deal acceleration through AWS funding mechanisms.
Qualifications and Skills:
Bachelor’s degree in Business, Technology, or a related field. 5–10 years of experience in one or more of the following:
• Cloud alliances
• Enterprise partnerships
• Enterprise software sales
• Hyperscaler ecosystem roles
Experience working directly with the AWS partner ecosystem preferred. Familiarity with AWS ACE, co-sell processes, and Marketplace procurement.
Strong understanding of enterprise SaaS, data platforms, or AI solutions.
Ability to influence cross-organization stakeholders without authority.
Excellent communication and executive engagement skills.
Experience working with enterprise customers and account teams.
Strong organizational skills with the ability to manage multiple deals simultaneously.
Self-driven mindset suited for fast-paced startup environments.
Why Join Us?
Direct impact on pipeline and growth through a strategic hyperscaler channel.
Opportunity to shape Oraczen’s global partner go-to-market motion.
High-visibility role with leadership and revenue ownership.
Work on real enterprise AI deployments across industries.
Belonging at Oraczen:
We’re building more than just innovative technology — we’re building a culture where people from all backgrounds feel welcomed, valued, and empowered to thrive. Our strength lies in diverse perspectives, and we are committed to cultivating an inclusive environment where everyone can do their best work.
We evaluate applicants based on talent and potential, not background or identity. We welcome candidates of all races, ethnicities, genders, orientations, abilities, and experiences.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to have:
- Prior experience in working with startups or product-based companies
- Experience mentoring tech leads and helping shape engineering culture
- Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethics and culture.
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Brief Description:
We're seeking an AI/ML Engineer to join our team. As AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
● Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today (e.g., engineering APIs around OpenAI models)
● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics
● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
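A recurring piece of the applied-AI/ML engineering described above is wrapping a provider model call in production-grade plumbing such as retries. A minimal sketch, assuming a hypothetical `call_model` callable standing in for a real provider SDK call:

```python
import time
import random

def with_retries(call_model, prompt, max_attempts=3, base_delay=0.01):
    """Call an LLM-style function with exponential backoff on failures.

    `call_model` is a stand-in for a provider SDK call; for illustration,
    every exception is treated as transient and retried.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call_model(prompt)
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Usage: a flaky stub that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky(prompt):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return f"answer to: {prompt}"

print(with_retries(flaky, "summarise this ticket"))  # answer to: summarise this ticket
```

In a real deployment the retry policy would distinguish rate-limit errors from permanent failures rather than retrying everything.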
Qualifications & Skills
● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
● Proficiency in programming languages commonly used for AI/ML. Preferably Python
● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
● Strong understanding of machine learning algorithms, statistics, and data structures
● Experience with data preprocessing, data wrangling, and feature engineering
● Knowledge of deep learning architectures, neural networks, and transfer learning
● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
Job Title: Deployment Lead (Python, Linux, AWS)
Location: Coimbatore
Overview
We are seeking an experienced Deployment Lead to oversee the end-to-end deployment lifecycle of our applications and services. The ideal candidate will have deep expertise in Python, strong Linux administration skills, and hands-on experience with AWS cloud infrastructure. You will work closely with engineering, DevOps, QA, and product teams to ensure reliable, repeatable, and scalable deployments across multiple environments.
Key Responsibilities
- Lead and manage deployment activities for all application releases across development, staging, and production environments.
- Develop and maintain deployment automation, scripts, and tools using Python and shell scripting.
- Own and optimize CI/CD pipelines (e.g., GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline).
- Oversee Linux server administration, including configuration, troubleshooting, performance optimization, and security hardening.
- Design, implement, and maintain AWS infrastructure (EC2, S3, Lambda, IAM, RDS, ECS/EKS, CloudFormation/Terraform).
- Ensure robust monitoring, logging, and alerting using tools such as CloudWatch, Grafana, Prometheus, or ELK.
- Collaborate with developers to improve code readiness for deployment and production reliability.
- Manage environment configurations and ensure consistency and version control across environments.
- Lead incident response during production issues; conduct root-cause analysis and implement long-term fixes.
- Establish and enforce best practices for deployment, configuration management, and operational excellence.
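One of the responsibilities above is managing environment configurations and keeping them consistent across development, staging, and production. A minimal sketch of a configuration-drift check, with hypothetical environment names and keys:

```python
def config_drift(envs):
    """Report config keys whose values differ across environments.

    `envs` maps environment name -> flat config dict. Keys missing from
    an environment are reported as drift too. Illustrative sketch only;
    a real tool would read these dicts from version-controlled files.
    """
    all_keys = set().union(*(cfg.keys() for cfg in envs.values()))
    drift = {}
    for key in sorted(all_keys):
        values = {name: cfg.get(key, "<missing>") for name, cfg in envs.items()}
        if len(set(values.values())) > 1:
            drift[key] = values
    return drift

# Usage: "workers" differs and "debug" is missing from production.
envs = {
    "staging":    {"region": "ap-south-1", "workers": "4", "debug": "true"},
    "production": {"region": "ap-south-1", "workers": "8"},
}
print(config_drift(envs))
```

Intentional per-environment differences (like worker counts) would normally be allow-listed so the check only flags accidental drift.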
Required Skills & Qualifications
- 5+ years of experience in deployment engineering, DevOps, or site reliability engineering roles.
- Strong proficiency in Python for automation and tooling.
- Advanced experience with Linux systems administration (Ubuntu, CentOS, Amazon Linux).
- Hands-on work with AWS cloud services and infrastructure-as-code (CloudFormation or Terraform).
- Experience with containerization technologies such as Docker and orchestration platforms like ECS, EKS, or Kubernetes.
- Strong understanding of CI/CD tools and automated deployment strategies.
- Familiarity with networking concepts: DNS, load balancers, VPCs, firewalls, VPN, and routing.
- Expertise with monitoring, alerting, and logging solutions.
- Strong problem-solving and analytical skills; able to lead troubleshooting efforts.
- Excellent communication and leadership abilities.
Qualification- BTech-CS (2025 graduate only)
Joining: Immediate Joiner
Job Type: Trainee
Work Mode: Remote
Working Days: Monday to Friday
Shift (Rotational – based on project need):
· 5:00 PM – 2:00 AM IST
· 6:00 PM – 3:00 AM IST
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern web technologies. This role is ideal for freshers who want hands-on experience building scalable web applications while maintaining a healthy work-life balance through remote work.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills:
· Basic understanding of AI / Machine Learning concepts
· Exposure to AWS (deployment or cloud fundamentals)
· PHP development
· WordPress development and customization
· JavaScript (ES5 / ES6+)
· jQuery
· AJAX calls and asynchronous handling
· Event handling
· HTML5 & CSS3
· Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
Job Details
- Job Title: Android Developer
- Industry: IT- Services
- Function - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Delhi
- CTC Range: Best in Industry
Criteria:
· Strong technical background in Android application development and Kotlin
· Looking candidates having 5+ years of experience.
· Need candidates from Delhi NCR Only.
· All Academic backgrounds acceptable (except BCA).
· Immediate Joiners Preferred
· Candidate must have some experience working with IoT devices.
· Candidate should have experience working with Camera model X.
· Candidate's Academic scores must be 70% or above.
· Candidate having fluent communication will be an added advantage.
Job Description
About the Role:
Senior Android Team Lead will be responsible for testing, QC, debugging support for various Android and Java software/servers for products developed or procured by the company. The role includes debugging integration issues, handling on-field deployment challenges, and suggesting improvements or structured solutions. The candidate will also be responsible for scaling the architecture. You will work closely with other team members including Web Developers, Software Developers, Application Engineers, and Product Managers to test and deploy existing products. You will act as a Team Lead to coordinate and organize team efforts toward successful completion or demo of applications. This includes implementing projects from conception to deployment.
Responsibilities:
● Working with the Android SDK, Java, Kotlin, NDK
● Handling different Android versions and screen sizes
● Applying Android UI design principles, patterns, and best practices
Requirements:
● Strong technical background in Android application development and Kotlin
● Solid programming skills
● Detail-oriented with strong attention to specifics
● Excellent written and verbal communication skills
● Strong analytical and quick problem-solving ability
● Ability to quickly document requirements from open discussions
● Fast typing skills for documentation and communication
● Familiarity with JIRA, EPICs, Excel, Google Sheets, and Agile methodologies
● Team player with leadership qualities
● Decision-making ability and team management skills
● Interest in working in a startup environment with cutting-edge products
● Experience with design and architecture patterns
● Understanding of testing processes, debugging, code versioning, and repositories
● UI/UX experience
● Strong knowledge of Java & Kotlin
● Software development experience with strong coding skills
● Experience building services for data delivery to mobile clients
● Experience with relational and non-relational databases
● Knowledge of REST and JSON data handling
● Experience with libraries like Retrofit, RxJava, Dagger 2, Lottie
● Server integration (REST endpoints)
● Experience with AWS stack and Linux
● Apps shipped and available on Google Play
● Backend API development
● Familiarity with Android Studio, Eclipse IDE
● Good knowledge of mobile hardware, software, and operating systems
● Willingness to work in a fast-paced startup environment
● Strong oral communication and presentation skills
● Team-oriented, with a positive approach to technology and engineering
● Result-oriented with a focus on efficiency and timeliness
● Strong self-awareness and ability to work under deadlines
● Proficiency in Microsoft Project, PowerPoint, Excel, Word
● Willingness to mentor and manage team members
● Willing to travel 5–10% of the time for demos, training, and collaboration
Preferred Background:
● Understanding of Artificial Intelligence and Machine Learning
● B.S. / M.S. in Computer Science, Electrical, or Electronics Engineering
● 5+ years’ experience with Android, Java Server, JSP
● Experience with Virtual Reality and Augmented Reality
● Familiarity with Test-Driven Development
● Background in CS or ECE
● Python experience is a big plus
● iOS development knowledge (not mandatory)
● Strong foundation in data structures and algorithms
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
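At the core of the RAG and semantic-search responsibilities above is similarity-based retrieval over embeddings. A minimal sketch using plain Python in place of a real embedding model and vector database (FAISS, Pinecone, ChromaDB); the vectors and document ids are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the top-k document ids by cosine similarity.

    `store` is a list of (doc_id, embedding) pairs. In a real RAG
    pipeline the embeddings would come from a model and the search
    would run inside a vector database, not a Python sort.
    """
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Usage: toy 3-dimensional "embeddings" for three knowledge-base documents.
store = [
    ("refund-policy",  [0.9, 0.1, 0.0]),
    ("shipping-times", [0.1, 0.9, 0.0]),
    ("warranty",       [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.0], store))  # ['refund-policy', 'warranty']
```

The retrieved documents would then be concatenated into the LLM prompt, which is the "augmented generation" half of RAG.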
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum/ Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
Mandatory Skills: Java API AND AWS
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
Job Details
- Job Title: Lead I - Data Engineering (Python, AWS Glue, Pyspark, Terraform)
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-7 years
- Employment Type: Full Time
- Job Location: Hyderabad
- CTC Range: Best in Industry
Job Description
Data Engineer with AWS, Python, Glue, Terraform, Step Functions, and Spark
Skills: Python, AWS Glue, PySpark, Terraform (all mandatory)
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
We are hiring an Associate Technical Architect with strong expertise in AWS-based Data Platforms to design scalable data lakes, warehouses, and enterprise data pipelines while working with global teams.
Key Responsibilities
- Design and implement scalable data warehouse, data lake, and lakehouse architectures on AWS
- Build resilient and modular data pipelines using native AWS services
- Architect cloud-based data platforms and evaluate service trade-offs
- Optimize large-scale data processing and query performance
- Collaborate with global cross-functional teams (Engineering, QA, PMs, Stakeholders)
- Communicate technical roadmap, risks, and mitigation strategies
Must-Have Skills
- 8+ years of experience in AWS Data Engineering / Data Architecture
- Hands-on experience with AWS services:
- Amazon S3
- AWS Glue
- AWS Lambda
- Amazon EMR
- AWS Kinesis (Streams & Firehose)
- AWS Step Functions / MWAA
- Amazon Redshift (Spectrum & Serverless)
- Amazon Athena
- Amazon RDS
- AWS Lake Formation
- AWS DMS, EventBridge, SNS, SQS
- Strong programming skills in Python & PySpark
- Advanced SQL with query optimization & performance tuning
- Deep understanding of:
- MPP databases
- Partitioning & indexing strategies
- Data modeling (Dimensional, Normalized, Lakehouse)
- Experience building resilient ETL/data pipelines
- Knowledge of AWS fundamentals:
- Security
- Networking
- Disaster Recovery
- Scalability & resilience
- Experience with on-prem → AWS migrations
- AWS Certification (Solution Architect Associate / Data Engineer Associate)
Good-to-Have Skills
- Domain experience: FSI / Retail / CPG
- Data governance & virtualization tools:
- Collibra
- Denodo
- QuickSight / Power BI / Tableau
- Exposure to:
- Terraform (IaC)
- CI/CD pipelines
- SSIS
- Apache NiFi, Hive, HDFS, Sqoop
- Data Mesh architecture
- Experience with NoSQL databases:
- DynamoDB
- MongoDB
- DocumentDB
Soft Skills
- Strong problem-solving and analytical mindset
- Excellent communication and stakeholder management skills
- Ability to translate technical concepts into business outcomes
- Experience working with distributed/global teams
As a Senior Data Engineer, you will be responsible for building and delivering a Lakehouse-based data pipeline. This is a hands-on role focused on implementing real-time and batch data ingestion, processing, and delivery workflows, while ensuring strong monitoring, observability, and data quality across the entire pipeline.
Must-Have Skills
- 3+ years of hands-on experience building large-scale data pipelines
- Strong experience with Spark Streaming, AWS Glue, and EMR for real-time and batch processing
- Proficiency in PySpark/Python, including building Kafka producers for data ingestion
- Experience working with Confluent Kafka and Spark Streaming for ingestion from on-premise sources
- Solid understanding of AWS services including:
- S3
- Redshift
- Glue
- CloudWatch
- Secrets Manager
- Experience working with Medallion Architecture and hybrid data destinations (e.g., Redshift + on-prem Oracle)
- Ability to implement monitoring dashboards and observability using tools like CloudWatch or Datadog
- Strong SQL skills for data validation and job-level metrics development
- Experience building alerting mechanisms for pipeline failures and performance issues
- Strong collaboration and communication skills
- Proven ownership mindset — driving deliverables from design to deployment
- Experience mentoring junior engineers, conducting code reviews, and guiding best practices
- AWS Certified Data Engineer – Associate (preferred)
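The SQL-based data validation and job-level metrics called for above can be sketched with an in-memory SQLite table standing in for a warehouse table; the table and column names are illustrative:

```python
import sqlite3

def table_metrics(conn, table, not_null_cols):
    """Compute simple job-level data-quality metrics for a table.

    Returns the row count plus a null count for each listed column.
    In a real pipeline these numbers would be emitted to a monitoring
    system (e.g. CloudWatch or Datadog) and alerted on when thresholds
    are breached.
    """
    cur = conn.cursor()
    metrics = {"row_count": cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]}
    for col in not_null_cols:
        metrics[f"null_{col}"] = cur.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
    return metrics

# Usage: a toy orders table with one null amount.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, None), (3, 7.5)])
print(table_metrics(conn, "orders", ["amount"]))  # {'row_count': 3, 'null_amount': 1}
```

Identifiers are interpolated directly here for brevity; a production version should validate table and column names against an allow-list.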
Good-to-Have Skills
- Experience with orchestration tools such as Apache Airflow or AWS Step Functions
- Exposure to Big Data ecosystem tools:
- Sqoop
- HDFS
- Hive
- NiFi
- Exposure to Terraform for infrastructure automation
- Familiarity with CI/CD pipelines for data workflows
Job Details
- Job Title: Delivery Manager
- Industry: IT- Services
- Function - Information technology (IT)
- Experience Required: 15-18 years
- Employment Type: Full Time
- Job Location: Hyderabad
- CTC Range: Best in Industry
Preferred Skills: Excellent Communication & Stakeholder Management, Delivery Leadership, Scaled Agile, Program Governance, Cybersecurity Delivery, Executive Communication
Criteria:
1. 15+ years of experience in IT Services / System Integration / Cybersecurity services companies.
2. Must have handled enterprise client implementation projects (not internal product development only).
3. Proven ownership of end-to-end project delivery including transition to support/AMC.
4. Managed multi-stream technology implementation programs
5. Experience handling BFSI / Ecommerce / Retail / Enterprise clients.
6. Strong executive stakeholder handling and governance reporting
7. Strong hands-on exposure to SDLC delivery models
8. Prior experience delivering Cybersecurity / IAM / Cloud Security / Infrastructure / Enterprise IT projects.
9. Clear understanding of delivery governance, risk management, and milestone control.
10. Candidate must have PMP and AWS certifications for this role.
Note- Only Male candidates will be considered for this role.
Job Description
Head – Project / Delivery Management
Role Overview
We are seeking a highly experienced Project / Delivery Leader responsible for end-to-end delivery of all organizational projects, ensuring quality, timeliness, cost efficiency, and customer satisfaction.
This role demands strong expertise in scaled Agile delivery, SDLC management, cybersecurity projects, stakeholder leadership, and large-scale program execution.
Key Roles & Responsibilities
- Responsible for delivery of all projects across the organization.
- Lead project management across all SDLC delivery methodologies.
- Ensure successful project completion, handover, and future opportunity enhancement.
- Ensure seamless transition of implementation projects to support.
- Manage large-scale programs and multi-team environments.
- Strong decision-making and problem-solving capability.
- Expert client stakeholder management and executive communication.
- Present roadmap status, risks, and issues to executive leadership and mitigate roadblocks efficiently.
- Keep teams aligned with process standards at every stage.
- Monitor project progress and drive performance improvements.
- Prepare and present status reports to stakeholders.
- Own Cost / Quality / Timelines / Cybersecurity deliverables for allocated projects.
- Maximize resource utilization and proactive upskilling based on future demand.
- Ensure Customer Satisfaction (CSAT ownership).
- Complete delivery team management.
- Attrition optimization and team stability management.
Mandatory Skills & Experience
- 15+ years of proven experience in Project/Delivery Management (with at least some exposure to Business Analysis).
- Strong expertise in scaling Lean & Agile practices across large development programs.
- Experience managing scaled Agile frameworks such as SAFe, DAD, Scrum, Kanban, or other iterative models at scale.
- Working knowledge of all SDLC delivery models.
- Excellent people and project management skills.
- Strong communication and executive presentation skills.
- Strong analytical and problem-solving ability.
- Experience working in small-scale organizations handling large enterprise clients.
- Proficiency in productivity tools – MS Excel, MS PowerPoint, MS Project.
- Prior experience handling Cybersecurity projects in BFSI, Ecommerce, Retail domains.
Educational Qualifications
- Engineering (CSE/ECE/EEE preferred) + MBA from reputable institutes.
- MBA specialization in Systems / Organizational Management / IT Business Management preferred.
- Management programs from reputed institutes such as IIMs are an added advantage.
- Entire education completed in English-medium institutions.
Additional Requirements:
- Male candidate only
- Clean shave and business formals (Grooming Policy)
- Work from Office only
Certifications
Mandatory:
- PMP
- AWS Certification
Good to Have:
- ITIL
- Certified Scrum Master
- PRINCE2
- CISSP
- CISA
🚀 Job Title : Backend Engineer (Go / Python / Java)
Experience : 3+ Years
Location : Bangalore (Client Location – Work From Office)
Notice Period : Immediate to 15 Days
Open Positions : 4
Working Days : 6 Days a Week
🧠 Job Summary :
We are looking for a highly skilled Backend Engineer to build scalable, reliable, and high-performance systems in a fast-paced product environment.
You will own large features end-to-end — from design and development to deployment and monitoring — while collaborating closely with product, frontend, and infrastructure teams.
This role requires strong backend fundamentals, distributed systems exposure, and a mindset of operational ownership.
⭐ Mandatory Skills :
Strong backend development experience in Go / Python (FastAPI) / Java (Spring Boot) with hands-on expertise in Microservices, REST APIs, PostgreSQL, Redis, Kafka/SQS, AWS/GCP, Docker, Kubernetes, CI/CD, and strong DSA & System Design fundamentals.
🔧 Key Responsibilities :
- Design, develop, test, and deploy backend services end-to-end.
- Build scalable, modular, and production-grade microservices.
- Develop and maintain RESTful APIs.
- Architect reliable distributed systems with performance and fault tolerance in mind.
- Debug complex cross-system production issues.
- Implement secure development practices (authentication, authorization, data integrity).
- Work with monitoring dashboards, alerts, and performance metrics.
- Participate in code reviews and enforce engineering best practices.
- Contribute to CI/CD pipelines and release processes.
- Collaborate with product, frontend, and DevOps teams.
✅ Required Skills :
- Strong proficiency in Go OR Python (FastAPI) OR Java (Spring Boot).
- Hands-on experience building Microservices-based architectures.
- Strong understanding of REST APIs & distributed systems.
- Experience with PostgreSQL and Redis.
- Exposure to Kafka / SQS or other messaging systems.
- Hands-on experience with AWS or GCP.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD pipelines.
- Strong knowledge of Data Structures & System Design.
- Ability to independently own features and solve ambiguous engineering problems.
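Several of the skills above (Kafka/SQS messaging, fault tolerance, operational ownership) come together in the retry-with-backoff pattern used by message consumers. The sketch below is illustrative only, assuming a plain Python handler in place of a real Kafka/SQS client; the handler name and message shape are made up for the example.

```python
import time

def process_with_retry(handler, message, max_attempts=3, base_delay=0.01):
    """Retry a message handler with exponential backoff; return True on success."""
    for attempt in range(max_attempts):
        try:
            handler(message)
            return True
        except Exception:
            if attempt == max_attempts - 1:
                return False  # exhausted: route to a dead-letter queue in practice
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Demo: a handler that fails twice before succeeding.
attempts = {"n": 0}

def flaky_handler(msg):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")

assert process_with_retry(flaky_handler, {"id": 1}) is True
assert attempts["n"] == 3
```

In a real deployment the sleep would be replaced by the broker's redelivery/visibility-timeout mechanism, and exhausted messages would land in a dead-letter queue rather than returning False.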
⭐ Preferred Background :
- Experience in product-based companies.
- Exposure to high-throughput or event-driven systems.
- Strong focus on code quality, observability, and reliability.
- Comfortable working in high-growth, fast-paced environments.
🧑💻 Interview Process :
- 1 Internal Screening Round
- HR Discussion (Project & Communication Evaluation)
- 3 Technical Rounds with Client
This is a fresh requirement, and interviews will be scheduled immediately.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
- Strong understanding of Core Python, data structures, OOPs, exception handling, and logical problem-solving.
- Experience in at least one Python framework (FastAPI preferred, Flask/Django acceptable).
- Good knowledge of REST API development and API authentication (JWT/OAuth).
- Experience with SQL databases (MySQL/PostgreSQL) & NoSQL databases (MongoDB/Firestore).
- Basic understanding of cloud platforms (GCP or AWS).
- Experience with Git, branching strategies, and code reviews.
- Solid understanding of performance optimization and writing clean, efficient code.
- Develop, test, and maintain high-quality Python applications using FastAPI (or Flask/Django).
- Design and implement RESTful APIs with strong understanding of request/response cycles, data validation, and authentication.
- Work with SQL (MySQL/PostgreSQL) and NoSQL (MongoDB/Firestore) databases, including schema design and query optimization.
- Experience with Google Cloud (BigQuery, Dataflow, Notebooks) will be a strong plus.
- Work with cloud environments (GCP/AWS) for deployments, storage, logging, etc.
- Use version control tools such as Git/BitBucket for collaborative development.
- Support and build data pipelines using Dataflow/Beam and BigQuery if required.
- Experience with GCP services such as BigQuery, Dataflow (Apache Beam), Cloud Functions, and Notebooks.
- Good to have: exposure to microservices architecture.
- Familiarity with Redis, Elasticsearch, or message queues (Pub/Sub, RabbitMQ, Kafka).
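The API-authentication requirement above (JWT/OAuth) boils down to issuing and verifying signed tokens. The sketch below is a minimal, stdlib-only illustration of that idea, not the JWT/OAuth libraries the role expects; the secret key and claim names are hypothetical.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"  # hypothetical key for illustration only

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(payload):
    """Serialize claims and append an HMAC-SHA256 signature, JWT-style."""
    body = b64url(json.dumps(payload, sort_keys=True).encode())
    sig = b64url(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    return f"{body}.{sig}"

def verify_token(token):
    """Return the claims if the signature checks out, else None."""
    body, sig = token.rsplit(".", 1)
    expected = b64url(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered payload or wrong key
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))

token = issue_token({"sub": "user-42", "exp": int(time.time()) + 3600})
claims = verify_token(token)
assert claims is not None and claims["sub"] == "user-42"
```

A production service would use a maintained library (e.g. a JWT implementation) that also handles headers, expiry checks, and key rotation; the point here is only the sign-then-verify flow.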
Company Description
Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.
Location : Mumbai – Tech Data Office
Experience : 5 - 8 years.
Duration : 1-year contract (extendable)
Job Overview
We are seeking a highly skilled Sales Engineer (L2/L3) with in-depth expertise in Palo Alto Networks solutions. This role requires designing, implementing, and supporting cutting-edge network and security solutions to meet customers' technical and business needs. The ideal candidate will have strong experience in sales engineering and advanced skills in deploying, troubleshooting, and optimizing Palo Alto products and related technologies, with the ability to assist in implementation tasks when required.
Key Responsibilities
Solution Design & Technical Consultation:
- Collaborate with sales teams and customers to understand business and technical requirements.
- Design and propose solutions leveraging Palo Alto Networks technologies, including Next-Generation Firewalls (NGFW), Prisma Access, Panorama, SD-WAN, and Threat Prevention.
- Prepare detailed technical proposals, configurations, and proof-of-concept (POC) demonstrations tailored to client needs.
- Optimize existing customer deployments, ensuring alignment with industry best practices.
Customer Engagement & Implementation:
- Present and demonstrate Palo Alto solutions to stakeholders, addressing technical challenges and business objectives.
- Conduct customer and partner workshops, enablement sessions, and product training.
- Provide post-sales support to address implementation challenges and fine-tune deployments.
- Lead and assist with hands-on implementations of Palo Alto Networks products when required.
Support & Troubleshooting:
- Provide L2-L3 level troubleshooting and issue resolution for Palo Alto Networks products, including advanced debugging and system analysis.
- Assist with upgrades, migrations, and integration of Palo Alto solutions with other security and network infrastructures.
- Develop runbooks, workflows, and documentation for post-sales handover to operations teams.
Partner Enablement & Ecosystem Management:
- Collaborate with channel partners to build technical competency and promote adoption of Palo Alto solutions.
- Support certification readiness and compliance for both internal and partner teams.
- Participate in events, workshops, and seminars to showcase technical expertise.
Skills and Qualifications
Technical Skills:
- Advanced expertise in Palo Alto Networks technologies, including NGFW, Panorama, Prisma Access, SD-WAN, and GlobalProtect.
- Strong knowledge of networking protocols (e.g., TCP/IP, BGP, OSPF) and security frameworks (e.g., Zero Trust, SASE).
- Proficiency in troubleshooting and root-cause analysis for complex networking and security issues.
- Experience with security automation tools and integrations (e.g., API scripting, Ansible, Terraform).
Soft Skills:
- Excellent communication and presentation skills, with the ability to convey technical concepts to diverse audiences.
- Strong analytical and problem-solving skills, with a focus on delivering customer-centric solutions.
- Ability to manage competing priorities and maintain operational discipline under tight deadlines.
Experience:
- 5+ years of experience in sales engineering, solution architecture, or advanced technical support roles in the IT security domain.
- Hands-on experience in designing and deploying large-scale Palo Alto Networks solutions in enterprise environments.
Education and Certifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications such as PCNSA, PCNSE, or equivalent vendor certifications (e.g., CCNP Security, NSE4) are highly preferred.
.NET Full Stack Developer
Experience: 6-8 years, with a bachelor's degree or equivalent
Overview:
Seeking an experienced Full Stack Developer with strong engineering practices, problem-solving abilities, and excellent communication skills.
Required Skills:
● .NET Core, C#, SQL, Unit Testing, Design Patterns
● Message Queues (RabbitMQ or similar)
● Experience working with SQL Server
● Jenkins, Git, Testing frameworks
● SCRUM/Agile methodologies
● Time management across multiple projects
Preferred Skills:
● AWS services (S3, API Gateway, SNS, SQS, RDS, CloudWatch)
● Docker Containers, Kubernetes
● IT Infrastructure understanding
● MongoDB/NoSQL databases
● Frontend frameworks (Stencil JS, Angular, React)
● Microservices architecture
Key Responsibilities:
● Deliver high-quality solutions
● Work independently and collaboratively
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP (relevant certifications are a strong advantage)
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
• Strong hands-on experience with AWS services.
• Expertise in Terraform and IaC principles.
• Experience building CI/CD pipelines and working with Git.
• Proficiency with Docker and Kubernetes.
• Solid understanding of Linux administration, networking fundamentals, and IAM.
• Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
• Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
• Scripting experience in Bash, Python, or PowerShell.
• Exposure to GCP, Azure, or multi-cloud architectures is a plus.
About Company:
Snapsight is an AI-powered platform that delivers real-time event summaries in 75+ languages. We work with conferences worldwide and won the 2024 Skift Award for Most Innovative Event Tech. We're an early-stage startup scaling fast.
Join us if you want to become part of a vibrant and fast-moving product company that's on a mission to connect people around the world through events.
Location: Remote/Work From Home
What you'll be doing:
- Writing reusable, testable, and efficient code in Node.js for back-end services.
- Ensuring optimal and high-performance code logic for the data from/to the database.
- Collaborating with front-end developers on the integrations.
- Implementing effective security protocols, data protection measures, and storage solutions.
- Preparing technical specification documents for the developed features.
- Providing technical recommendations and suggesting improvements to the product.
- Writing unit test cases for APIs.
- Documenting code standards and putting them into practice.
- Staying updated on the advancements in the field of Node.js development.
- Open to new challenges and comfortable taking up exploratory tasks.
Skills:
- 3-5 years of strong proficiency in Node.js and its core principles.
- Experience in test-driven development.
- Experience with NoSQL databases like MongoDB is required
- Experience with MySQL databases
- RESTful/GraphQL API design and development
- Docker and AWS experience is a plus
- Extensive knowledge of JavaScript, PHP, web stacks, libraries, and frameworks.
- Strong interpersonal, communication, and collaboration skills.
- Exceptional analytical and problem-solving aptitude
- Experience with a version control system like Git
- Knowledge about the Software Development Life Cycle Model, secure development best practices and standards, source control, code review, build and deployment, continuous integration
We are looking for an experienced DevOps Architect with strong expertise in telecom environments (OSS/BSS, 4G/5G core, network systems). The candidate will design and implement scalable, highly available, and automated DevOps solutions to support telecom-grade applications and infrastructure.
Responsibilities:
- Design and implement DevOps architecture for telecom applications (OSS/BSS, mediation systems, billing platforms)
- Architect CI/CD pipelines using Jenkins, GitLab, or Azure DevOps
- Manage cloud infrastructure on Amazon Web Services, Microsoft Azure, or hybrid telecom data centers
- Implement containerization using Docker and orchestration with Kubernetes
- Design Infrastructure as Code (IaC) using Terraform
- Ensure high availability, disaster recovery, and zero-downtime deployment strategies
- Automate deployments for 4G/5G core network functions (CNFs/VNFs)
- Implement monitoring solutions using Prometheus, Grafana, and ELK Stack
- Work closely with network engineering and telecom operations teams
- Ensure compliance with telecom-grade security standards
Role Description
This is a full-time on-site role for a Python Full Stack Developer located in Pune. You will be responsible for end-to-end development of scalable, AI-driven web applications. Day-to-day tasks involve architecting asynchronous backend services using Python and FastAPI, building dynamic user interfaces with ReactJS, and managing cloud infrastructure on AWS. You will collaborate with data scientists and product teams to integrate AI models into enterprise solutions while ensuring high performance and reliability.
Key Responsibilities
1. Design and develop high-performance asynchronous APIs using Python and FastAPI.
2. Build responsive, interactive frontends using ReactJS, HTML, CSS, and Tailwind CSS.
3. Implement distributed task queues and caching mechanisms using Celery and Redis.
4. Architect and optimize databases, managing both structured (PostgreSQL) and unstructured (MongoDB) data.
5. Deploy and manage infrastructure on AWS (EC2, Lambda, S3) and maintain CI/CD pipelines for automated deployment.
6. Integrate AI/ML models into production workflows and optimize system performance for scalability.
7. Ensure application security, data privacy, and code quality through best practices and regular testing.
Required Skills & Qualifications
1. 3–5 years of experience in full-stack development with a strong focus on Python.
2. Proficiency in FastAPI and deep understanding of asynchronous programming (asyncio).
3. Solid experience with ReactJS, HTML, CSS, JavaScript, and Tailwind CSS.
4. Hands-on experience with Celery and Redis for background task processing.
5. Working knowledge of AWS services and containerization tools like Docker.
6. Proficiency in database management using PostgreSQL and MongoDB.
7. Experience setting up CI/CD pipelines (Jenkins, GitHub Actions, etc.) and version control (Git).
8. Strong understanding of RESTful API design, microservices, and security best practices.
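The caching mechanism called out above (Celery/Redis background processing with Redis as the cache) typically relies on set-with-expiry semantics. Below is a minimal sketch of that pattern, assuming an in-memory dict as a stand-in for Redis; the class and key names are invented for the example.

```python
import time

class TTLCache:
    """Tiny stand-in for Redis SETEX/GET semantics: values expire after ttl seconds."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl, value):
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction, mirroring key expiry
            return None
        return value

cache = TTLCache()
cache.setex("user:42:profile", 0.05, {"name": "Asha"})
assert cache.get("user:42:profile") == {"name": "Asha"}
time.sleep(0.06)
assert cache.get("user:42:profile") is None
```

With a real Redis client the calls would be `setex`/`get` against the server and the value would be serialized (e.g. JSON), but the cache-then-expire flow is the same.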
🚀 We’re Hiring: Senior Full Stack Engineer (On-Call Support) 🚀
Work Mode: Remote
Shift Timings: PST
Working Hours: 9 hours (including a 1-hour break)
Are you a seasoned Full Stack Engineer who enjoys solving real-world production challenges and being the go-to expert when it matters most? This role is for you! 💡
Role Overview
We’re looking for 3 Senior Resources to join our On-Call Support Team, ensuring platform stability and rapid issue resolution across backend, frontend, and infrastructure.
Tech Stack
Node.js (NestJS)
React.js (Next.js)
React Native
PostgreSQL
AWS (Hybrid with On-Premise)
Linux
Docker Swarm
Portainer
What You’ll Do
Provide on-call support for production systems
Troubleshoot and resolve high-priority issues
Collaborate with senior engineers to maintain system reliability
Work across backend, frontend, and infrastructure layers
Ensure uptime, performance, and scalability of applications
What We’re Looking For
Strong experience with modern JavaScript frameworks
Hands-on knowledge of cloud + on-prem environments
Solid understanding of containerized deployments
Excellent problem-solving and debugging skills
Comfortable working in on-call support rotations
We are looking for an experienced AI Technical Architect who can design and lead end-to-end AI/ML solutions, define scalable architecture, and guide development teams in building intelligent applications aligned with business goals.
Key Responsibilities:
- Design AI/ML architecture and technical solutions.
- Lead AI strategy, model deployment, and integration.
- Build scalable AI pipelines and cloud-based solutions.
- Work closely with data scientists, developers, and stakeholders.
- Ensure best practices in MLOps, automation, and performance optimization.
- Evaluate new AI technologies and frameworks.
JOB DETAILS:
* Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: java, spring boot, kafka development, cicd, postgresql, gitlab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
Job Summary
We are looking for a skilled AWS DevOps Engineer with 5+ years of experience to design, implement, and manage scalable, secure, and highly available cloud infrastructure. The ideal candidate should have strong expertise in CI/CD, automation, containerization, and cloud-native deployments on Amazon Web Services.
Key Responsibilities
- Design, build, and maintain scalable infrastructure on AWS
- Implement and manage CI/CD pipelines for automated build, test, and deployment
- Automate infrastructure using Infrastructure as Code (IaC) tools
- Monitor system performance, availability, and security
- Manage containerized applications and orchestration platforms
- Troubleshoot production issues and ensure high availability
- Collaborate with development teams for DevOps best practices
- Implement logging, monitoring, and alerting systems
Required Skills
- Strong hands-on experience with AWS services (EC2, S3, RDS, VPC, IAM, Lambda, CloudWatch)
- Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI/CD
- Infrastructure as Code using Terraform / CloudFormation
- Containerization using Docker
- Container orchestration using Kubernetes / EKS
- Scripting knowledge in Python / Bash / Shell
- Experience with monitoring tools (CloudWatch, Prometheus, Grafana)
- Strong understanding of Linux systems and networking
- Experience with Git and version control
Good to Have
- Experience with configuration management tools (Ansible, Chef, Puppet)
- Knowledge of microservices architecture
- Experience with security best practices and DevSecOps
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience working in Agile/Scrum teams
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
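The A/B-testing and hypothesis-testing skills above reduce, in the simplest case, to a two-proportion z-test on conversion rates. The sketch below uses only the standard library (via the normal CDF from `math.erf`); the sample counts are invented for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 4.0% vs 5.2% conversion on 5,000 users per arm.
z, p = two_proportion_ztest(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
assert z > 0 and p < 0.05  # variant B's lift is statistically significant here
```

In practice one would use `scipy.stats` or `statsmodels` and pre-register the sample size, but the arithmetic above is what those calls compute.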
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
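"Embeddings and semantic similarity for analytics" above usually means cosine-similarity ranking over embedding vectors. The sketch below is a toy illustration with hand-written 3-d vectors standing in for real model outputs; the document IDs and values are made up.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, corpus, k=2):
    """Rank corpus entries by cosine similarity to the query embedding."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-d embeddings standing in for real model outputs.
corpus = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.9, 0.1],
    "api-reference": [0.0, 0.2, 0.9],
}
assert top_k([0.8, 0.2, 0.1], corpus, k=1) == ["refund-policy"]
```

A production retrieval step would swap the dict for a vector database and the toy vectors for model-generated embeddings, but the ranking logic is the same.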
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of the company, a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
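The retrieval step behind the RAG architectures mentioned above can be sketched in miniature: embed documents and a query, rank by cosine similarity, and prepend the best match to the LLM prompt. This is a toy stand-in only, assuming bag-of-words "embeddings" in place of a real embedding model and a vector database such as FAISS or Pinecone; the document strings are illustrative.

```python
# Minimal sketch of the retrieval step in a RAG pipeline (toy stand-in for
# a real embedding model + vector database such as FAISS or Pinecone).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Hypothetical 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "ICD-10 codes classify diagnoses for medical billing",
    "Whisper transcribes physician dictation to text",
    "FHIR defines REST resources for clinical data exchange",
]
context = retrieve("how are diagnoses coded for billing", docs)[0]
# The retrieved context would then be stuffed into the prompt sent to an LLM.
prompt = f"Context: {context}\nQuestion: how are diagnoses coded for billing?"
```

In a production system the `embed` function would call an embedding model and `retrieve` would query a vector index, but the ranking-then-augmenting shape is the same.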
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
Job Role: Teamcenter Admin
• Teamcenter and CAD (NX) Configuration Management
• Advanced debugging and root-cause analysis beyond L2
• Code fixes and minor defect remediation
• AWS knowledge, which is foundational to our Teamcenter architecture
• Experience supporting weekend and holiday code deployments
• Operational administration (break/fix, ticket escalations, problem management)
• Support for project activities
• Deployment and code release support
• Hypercare support following deployment, which is expected to onboard more than 1,000 additional users
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 (Python-based) or P4-Fusion to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
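One concrete pre-migration step from the list above is finding files that exceed GitHub's 100 MB per-file limit so they can be routed through Git LFS before history is imported. A minimal sketch, assuming a checked-out Perforce workspace on local disk (paths are illustrative):

```python
# Pre-migration sweep: list files above GitHub's 100 MB per-file limit so
# they can be moved to Git LFS before the Perforce history is imported.
import os

GITHUB_LIMIT = 100 * 1024 * 1024  # GitHub rejects individual files over 100 MB

def find_large_files(root: str, limit: int = GITHUB_LIMIT) -> list[tuple[str, int]]:
    """Walk a workspace and return (path, size) pairs over the limit, largest first."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            hits.append((path, os.path.getsize(path)))
    return sorted((h for h in hits if h[1] > limit), key=lambda h: h[1], reverse=True)

# Each hit would then get a `git lfs track "<pattern>"` entry in .gitattributes
# before running the history import.
```

Running the sweep before the import, rather than after a failed push, keeps the migration incremental and avoids rewriting history twice.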
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 and P4-Fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Head of Engineering/Senior Product Manager
* Industry: Digital transformation excellence provider
* Salary: Best in Industry
* Experience: 12-20 years
* Location: Mumbai
Job Description
Role Overview
The VP / Head of Technology will lead the company’s technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company’s core verticals, including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.
This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.
Roles and Responsibilities:
Technology Execution & Architecture Leadership
· Own and execute the technology roadmap aligned with business goals.
· Build and maintain scalable architecture supporting multiple verticals.
· Enforce engineering best practices, code quality, performance, and security.
· Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.
Product & Engineering Delivery
· Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.
· Own the full SDLC — requirements, design, development, testing, deployment, support.
· Implement Agile, DevOps, CI/CD for faster releases and improved reliability.
· Oversee product/platform interoperability across all company systems.
Vertical-Specific Technology Leadership
Procurement Tech:
· Lead architecture and enhancements of procurement and indirect spend platforms.
· Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.
eCommerce:
· Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.
Sustainability/ESG:
· Support development of GHG tracking, reporting systems, and sustainability analytics platforms.
Business Services:
· Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.
Data, Cloud, Security & Infrastructure
· Own cloud infrastructure strategy (Azure/AWS/GCP).
· Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).
· Lead cybersecurity policies, monitoring, threat detection, and recovery planning.
· Drive observability, cost optimization, and system scalability.
AI, Automation & Innovation
· Integrate AI/ML, analytics, and automation into product platforms and service delivery.
· Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.
· Lead R&D for emerging tech aligned to business needs.
Leadership & Team Management
· Lead and mentor engineering managers, architects, developers, QA, and DevOps.
· Drive a culture of ownership, innovation, continuous learning, and performance accountability.
· Build capability development frameworks and internal talent pipelines.
Stakeholder Collaboration
· Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.
· Ensure transparent reporting on project status, risks, and technology KPIs.
· Manage vendor relationships, technology partnerships, and external consultants.
Education, Training, Skills, and Experience Requirements:
Experience & Background
· 16+ years in technology execution roles, including 5–7 years in senior leadership.
· Strong background in multi-product engineering for B2B platforms or enterprise systems.
· Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.
Technical Skills
· Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.
· Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.
· Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.
· Strong understanding of security, compliance, scalability, performance engineering.
Leadership Competencies
· Execution-focused technology leadership.
· Strong communication and stakeholder management skills.
· Ability to lead distributed teams, manage complexity, and drive measurable outcomes.
· Innovation mindset with practical implementation capability.
Education
· Bachelor’s or Master’s in Computer Science/Engineering or equivalent.
· Additional leadership education (MBA or similar) is a plus, not mandatory.
Travel Requirements
· Occasional travel for client meetings, technology reviews, or global delivery coordination.
Must-Haves
· 10+ years of technology experience, with at least 6 years leading large (50–100+) multi-product engineering teams.
· Must have worked on B2B platforms, with experience in Procurement Tech or Supply Chain.
· Min. 10+ Years of Expertise in Cloud-Native Architecture, Expert-level design in Azure, AWS, or GCP using Microservices, Kubernetes (K8s), and Docker.
· Min. 8+ Years of Expertise in Modern Engineering Practices, Advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).
· Hands-on leadership experience in Security & Compliance.
· Min. 3+ Years of Expertise in AI & Data Engineering, Practical implementation of LLMs, Predictive Analytics, or AI-driven automation
· Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.
· Min. 6+ Years of Expertise in B2B eCommerce: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.
· Strong product management exposure
· Proven experience in leading end-to-end team operations
· Relevant experience in product-driven organizations or platforms
· Strong Subject Matter Expertise (SME)
Education: Master's degree.
**************
Joining time / Notice Period: Immediate to 45 days.
Location: Andheri
5-day work week (3 days in office, 2 days from home)
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

JOB DETAILS:
* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5-8 years
* Location: Hyderabad
Job Summary
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.
Key Responsibilities
ETL Pipeline Development & Optimization
- Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
- Optimize data pipelines for performance, scalability, fault tolerance, and reliability.
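The end-to-end ETL shape described above can be sketched, in miniature, with nothing but the standard library: extract rows from a raw source, transform (filter and type-cast), and load into a queryable store. A production pipeline would express the same stages in PySpark or Glue against S3 and a warehouse; the schema, field names, and in-memory SQLite sink here are illustrative stand-ins.

```python
# Toy batch ETL pipeline: extract from CSV, transform (drop invalid rows,
# cast types), load into SQLite. Same extract/transform/load shape as a
# PySpark job, scaled down to the standard library.
import csv
import io
import sqlite3

RAW = "id,amount\n1,250\n2,-40\n3,130\n"  # stand-in for a file landed on S3

def extract(raw: str):
    """Parse the raw CSV into dict rows."""
    return csv.DictReader(io.StringIO(raw))

def transform(rows):
    """Cast types and drop records with negative amounts."""
    for row in rows:
        amount = int(row["amount"])
        if amount >= 0:
            yield (int(row["id"]), amount)

def load(records, conn):
    """Write the cleaned records to the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS payments (id INTEGER, amount INTEGER)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

Keeping each stage a separate function is what makes the pipeline testable and lets the transform logic be tuned or parallelized independently of the source and sink.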
Big Data Processing
- Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka.
- Ensure fault-tolerant, scalable, and high-performance data processing systems.
Cloud Infrastructure Development
- Build and manage scalable, cloud-native data infrastructure on AWS.
- Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.
Real-Time & Batch Data Integration
- Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
- Ensure consistency, data quality, and a unified view across multiple data sources and formats.
Data Analysis & Insights
- Partner with business teams and data scientists to understand data requirements.
- Perform in-depth data analysis to identify trends, patterns, and anomalies.
- Deliver high-quality datasets and present actionable insights to stakeholders.
CI/CD & Automation
- Implement and maintain CI/CD pipelines using Jenkins or similar tools.
- Automate testing, deployment, and monitoring to ensure smooth production releases.
Data Security & Compliance
- Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
- Implement data governance practices ensuring data integrity, security, and traceability.
Troubleshooting & Performance Tuning
- Identify and resolve performance bottlenecks in data pipelines.
- Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.
Collaboration & Cross-Functional Work
- Work closely with engineers, data scientists, product managers, and business stakeholders.
- Participate in agile ceremonies, sprint planning, and architectural discussions.
Skills & Qualifications
Mandatory (Must-Have) Skills
- AWS Expertise
- Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
- Strong understanding of cloud-native data architectures.
- Big Data Technologies
- Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
- Experience with Apache Spark and Apache Kafka in production environments.
- Data Frameworks
- Strong knowledge of Spark DataFrames and Datasets.
- ETL Pipeline Development
- Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
- Database Modeling & Data Warehousing
- Expertise in designing scalable data models for OLAP and OLTP systems.
- Data Analysis & Insights
- Ability to perform complex data analysis and extract actionable business insights.
- Strong analytical and problem-solving skills with a data-driven mindset.
- CI/CD & Automation
- Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
- Familiarity with automated testing and deployment workflows.
Good-to-Have (Preferred) Skills
- Knowledge of Java for data processing applications.
- Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
- Familiarity with data governance frameworks and compliance tooling.
- Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
- Exposure to cost optimization strategies for large-scale cloud data platforms.
Skills: Big Data, Scala Spark, Apache Spark, ETL pipeline development
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
Note: If a candidate is a quick joiner, based in Hyderabad, and fits within the approved budget, we will proceed with an offer.
F2F Interview: 14th Feb 2026
3 days in office, Hybrid model.
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹15,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working )
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend).
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js, Python. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with 1–2 years of full-time work experience in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
Role Overview
We are hiring for Humming Apps Technologies LLP, which is seeking a Senior Threat Modeler to join its security team and act as a strategic bridge between architecture and defense. This role focuses on proactively identifying vulnerabilities during the design phase to ensure applications, APIs, and cloud infrastructures are secure by design.
The position requires thinking from an attacker’s perspective to analyze trust boundaries, map attack paths, and influence the overall security posture of next-generation AI-driven and cloud-native systems. The goal is not only to detect issues but to prevent risks before implementation.
Key Responsibilities
Architectural Analysis
• Lead deep-dive threat modeling sessions across applications, APIs, microservices, and cloud-native environments
• Perform detailed reviews of system architecture, data flows, and trust boundaries
Threat Modeling Frameworks & Methodologies
• Apply industry-standard frameworks including STRIDE, PASTA, ATLAS, and MITRE ATT&CK
• Identify sophisticated attack vectors and model realistic threat scenarios
Security Design & Risk Mitigation
• Detect weaknesses during the design stage
• Provide actionable and prioritized mitigation recommendations
• Strengthen security posture through secure-by-design principles
Collaborative Security Integration
• Work closely with architects and developers during design and build phases
• Embed security practices directly into the SDLC
• Ensure security is incorporated early rather than retrofitted
Communication & Enablement
• Facilitate threat modeling demonstrations and walkthroughs
• Present findings and risk assessments to stakeholders
• Translate complex technical risks into clear, business-relevant insights
• Educate teams on secure design practices and emerging threats
Required Qualifications
Experience
• 5–10 years of dedicated experience in threat modeling, product security, or application security
Technical Expertise
• Strong understanding of software architecture and distributed systems
• Experience designing and securing RESTful APIs
• Hands-on knowledge of cloud platforms such as AWS, Azure, or GCP
Modern Threat Knowledge
• Expertise in current attack vectors including OWASP Top 10
• Understanding of API-specific threats
• Awareness of emerging risks in AI/LLM-based applications
Tools & Practices
• Practical experience with threat modeling tools
• Proficiency in technical diagramming and system visualization
Communication
• Excellent written and verbal English communication skills
• Ability to collaborate across engineering teams and stakeholders in different time zones
Preferred Qualifications
• Experience in consulting or client-facing professional services roles
• Industry certifications such as CISSP, CSSLP, OSCP, or equivalent
JOB DESCRIPTION:
Role: GRC Consultant
(1–7 Years Experience)
📍 Location: Anna Salai, Mount Road, Chennai, India
🕒 Full-Time | On-site
Pentabay Software Solutions, a fast-growing IT services company based in Chennai, is hiring a GRC Consultant with 1–4 years of experience in cloud security and GRC (Governance, Risk & Compliance). This is an exciting opportunity for early-career professionals to work with modern cloud platforms, global clients, and industry-standard security frameworks.
🔑 Key Responsibilities
Secure Cloud Infrastructure
Design and deploy secure environments ensuring alignment with HIPAA, GDPR, PCI DSS, PHI, and PII standards.
Firewall & Endpoint Protection
Manage and configure firewalls, WAFs, and endpoint security solutions to protect cloud and hybrid infrastructures.
GRC & Compliance Support
Contribute to Governance, Risk, and Compliance (GRC) initiatives by helping develop and implement policies, assess risks, maintain audit trails, and support compliance reporting.
Security Audits & Risk Assessments
Assist in vulnerability assessments, penetration tests, and audits to identify and remediate risks, ensuring continuous improvement of security posture.
✅ Required Skills & Qualifications:
- 1–7 years of experience in cybersecurity, cloud security, or related areas.
- Working knowledge of GRC frameworks, including policy creation, risk identification, and compliance reporting.
- Understanding of HIPAA, GDPR, PCI DSS, and data protection best practices.
- Familiarity with firewall management, network segmentation, and endpoint protection tools.
- Hands-on expertise with cloud platforms: AWS, Azure, GCP.
- Experience working with Canadian or international clients is an added advantage.
- Strong analytical mindset and good written and verbal communication skills
Role Overview:
Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric client/server development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 4 years of development experience in a high-level language such as Python, Java, or Go.
- Bachelor's (B.E., B.Tech) or Master's degree (M.E., M.Tech, MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in a public cloud environment using Kubernetes, etc. (Google Cloud and/or AWS)
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana:
Virtana delivers the industry's broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.
