50+ AWS (Amazon Web Services) Jobs in India
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 6-9 years
- Working Days: 5 days/week
- Job Location: Bengaluru
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it open to discussion which tools will solve the problem in the most robust and quickest way.
- That being said, our bright team of engineers has already constructed a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the tech stack we already use.
JOB DESCRIPTION:
Location: Pune, Mumbai
Mode of Work : 3 days from Office
DSA (collections, hash maps, trees, linked lists, arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, key Java 8 features, database optimization, microservices, and REST APIs
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
About the role:
As a Senior Java Developer, you will utilize your extensive Java programming skills and expertise to design and develop robust and scalable applications. You will collaborate with cross-functional teams, provide technical leadership, and contribute to the entire software development life cycle. With your deep understanding of Java technologies and frameworks, you will ensure the delivery of high-quality solutions that meet the project requirements and adhere to coding standards.
Role Responsibilities:
- Discuss new features and collaborate with the development and UX team, commercial product manager and product owner to get functionalities specified and implemented.
- Agree the technical implementation with involved component owners and estimate its work effort.
- Write great code and do code reviews for other engineers
- Implement, test and demonstrate new product features in an agile process.
- Develop complete sets of functionalities including the backend and frontend.
- Create new microservices, or migrate existing services, to run on a cloud infrastructure
- Work on further usability, performance improvements or quality assurance, including bug fixes and test automation.
- Watch out for potential security issues and fix them, or better avoid them altogether
Role requirements:
- BTech computer science or equivalent
- Java development skills with at least 3 to 5 years of experience. Knowledge of the most popular Java libraries and frameworks: JPA, Spring, Kafka, etc.
- Have a degree in computer science, or a similar background, or you just have enough professional experience to blow right through all your challenges
- Are a great communicator and an analytical, goal-oriented, quality-focused, yet agile person who likes to work with software engineers; you will interact a lot with architects, developers from other teams, component owners and system engineers
- Have a clear overview of all layers in computer software development, including REST APIs and how to make and integrate them in our products
- Have Java server-side development and SQL (PostgreSQL) database knowledge; NoSQL (MongoDB or DynamoDB) knowledge is nice to have.
- Are open to picking up innovative technologies as needed by the team. Have or want to build experience with cloud and DevOps infrastructure (like Kubernetes, Docker, Terraform, Concourse, etc.)
- Speak English fluently
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
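To give a concrete flavour of the automation work listed above (Python, Lambda, Systems Manager), here is a minimal, hypothetical sketch of a Lambda-style handler that flags running EC2 instances missing a cost-allocation tag. The tag key and the reporting step are assumptions for illustration, not part of this posting.

```python
# Hypothetical tagging-compliance check, runnable as an AWS Lambda handler
# or locally (requires boto3 and AWS credentials). The tag key is assumed.
import boto3

REQUIRED_TAG = "CostCenter"  # assumed cost-allocation tag key

def lambda_handler(event=None, context=None):
    ec2 = boto3.client("ec2")
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    # In a real setup this could publish to SNS or emit a CloudWatch metric.
    print(f"Instances missing {REQUIRED_TAG}: {untagged}")
    return {"untagged_count": len(untagged), "instance_ids": untagged}

if __name__ == "__main__":
    lambda_handler()
```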
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
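For illustration, a minimal sketch of the kind of third-party connector mentioned above, written in Python with requests. The endpoint, auth scheme, and response fields are hypothetical assumptions.

```python
# Minimal sketch of a third-party API connector with simple retry/backoff.
# The CRM endpoint, bearer token, and field names are placeholders.
import time
import requests

BASE_URL = "https://api.example-crm.com/v1"  # hypothetical CRM endpoint
API_TOKEN = "replace-me"                     # assumed bearer-token auth

def fetch_contacts(page_size: int = 100, max_retries: int = 3) -> list[dict]:
    """Pull contacts page by page, retrying transient server errors."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    contacts, page = [], 1
    while True:
        for attempt in range(max_retries):
            resp = requests.get(
                f"{BASE_URL}/contacts",
                params={"page": page, "per_page": page_size},
                headers=headers,
                timeout=10,
            )
            if resp.status_code < 500:
                break
            time.sleep(2 ** attempt)  # simple exponential backoff
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            return contacts
        contacts.extend(batch)
        page += 1
```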
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON and REST-based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS)
- Experienced in JS-based build/Package tools like Grunt, Gulp, Bower, Webpack.
Location: Bangalore, India
Company: CyberWarFare Labs
About Us:
CyberWarFare Labs is an ed-tech platform focused on cybersecurity, dedicated to solving cybersecurity problems by providing real-time, hands-on solutions to a B2C and B2B audience.
Job Overview:
We are looking for a DevOps / Cloud Infrastructure Intern to support our engineering team in managing cloud environments, automation, and infrastructure. This role will provide hands-on exposure to cloud platforms, containerization, networking, and DevOps practices. The intern will work closely with developers, QA, and operations teams to learn how modern infrastructure is provisioned, secured, and optimized for scalability and reliability.
Qualifications and requirements
- Pursuing/Graduated with a B.S./B.E./B.Tech/BCA/M.Tech/MCA degree in Computer Science or related field.
- Basic understanding of cloud platforms (AWS, Azure, GCP).
- Familiarity with Docker, Kubernetes, and IaC tools (Terraform/Ansible/CloudFormation) is a plus.
- Knowledge of scripting (Python, Bash, PowerShell).
- Fundamental networking knowledge (IP, routing, subnetting).
- Basic understanding of DevOps, DevSecOps, and CI/CD practices.
- Good problem-solving and collaboration skills.
- Ability to document work clearly and accurately.
Role and Responsibility
- Assist in provisioning and managing infrastructure using Terraform, Ansible, CloudFormation.
- Support deployment and management of applications on AWS, Azure, GCP with Docker & Kubernetes.
- Contribute to CI/CD pipelines and integrate basic DevSecOps practices.
- Write automation scripts in Python, Bash, PowerShell to optimize workflows.
- Learn and support network design (LAN/WAN/WLAN, IP addressing, routing).
- Help monitor performance and troubleshoot issues using tools like Wireshark, ping, traceroute, SNMP.
- Maintain and update documentation (diagrams, configs, topology).
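As a flavour of the automation and monitoring scripts an intern might write, here is a small, hedged Python example that checks host reachability with the system ping command; the target hosts are placeholders.

```python
# Simple reachability check using the system `ping` command; hosts are
# placeholders for whatever infrastructure is actually being monitored.
import platform
import subprocess

HOSTS = ["8.8.8.8", "internal-gateway.local"]  # placeholder targets

def is_reachable(host: str, count: int = 2) -> bool:
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(
            ["ping", count_flag, str(count), host],
            capture_output=True,
            timeout=10,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

if __name__ == "__main__":
    for host in HOSTS:
        status = "UP" if is_reachable(host) else "DOWN"
        print(f"{host}: {status}")
```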
Job Title : Java Backend Developer
Experience : 3 – 8 Years
Location : Pune (Onsite) (Pune candidates Only)
Notice Period : Immediate to 15 Days (or candidates serving notice whose last working day is near)
About the Role :
We are seeking an experienced Java Backend Developer with strong hands-on skills in backend microservices development, API design, cloud platforms, observability, and CI/CD.
The ideal candidate will contribute to building scalable, secure, and reliable applications while working closely with cross-functional teams.
Mandatory Skills : Java 8 / Java 17, Spring Boot 3.x, REST APIs, Hibernate / JPA, MySQL, MongoDB, Prometheus / Grafana / Spring Actuators, AWS, Docker, Jenkins / GitHub Actions, GitHub, Windows 7 / Linux.
Key Responsibilities :
- Design, develop, and maintain backend microservices and REST APIs
- Implement data persistence using relational and NoSQL databases
- Ensure performance, scalability, and security of backend systems
- Integrate observability and monitoring tools for production environments
- Work within CI/CD pipelines and containerized deployments
- Collaborate with DevOps, QA, and product teams for feature delivery
- Troubleshoot, optimize, and improve existing modules and services
Mandatory Skills :
- Languages & Frameworks : Java 8, Java 17, Spring Boot 3.x, REST APIs, Hibernate, JPA
- Databases : MySQL, MongoDB
- Observability : Prometheus, Grafana, Spring Actuators
- Cloud Technologies : AWS
- Containerization Tools : Docker
- CI/CD Tools : Jenkins, GitHub Actions
- Version Control : GitHub
- Operating Systems : Windows 7, Linux
Nice to Have :
- Strong analytical and debugging abilities
- Experience working in Agile/Scrum environments
- Good communication and collaborative skills
Hands-on experience implementing and managing DLP solutions in AWS and Azure
Strong expertise in data classification, labeling, and protection (e.g., Microsoft Purview, AIP)
Experience designing and enforcing DLP policies across cloud storage, email, endpoints, and SaaS apps
Proficient in monitoring, investigating, and remediating data leakage incidents
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of 2+ use cases out of recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification (Text G), embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
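To illustrate the kind of modelling work described above (e.g., fraud or propensity scoring under class imbalance), here is a minimal scikit-learn sketch on synthetic data; it is not the team's actual pipeline, just an indication of the expected hands-on skills.

```python
# Fraud/propensity-style classifier on an imbalanced, synthetic dataset,
# evaluated with PR-AUC (appropriate for severe class imbalance).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced data standing in for fraud/churn labels.
X, y = make_classification(
    n_samples=20_000, n_features=20, weights=[0.97, 0.03], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print(f"PR-AUC: {average_precision_score(y_test, scores):.3f}")
```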
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
Review Criteria:
- Strong Software Engineer full-stack profile using NodeJS / Python and React
- 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
- Must have strong experience working with TypeScript
- Must have experience in message-based systems like Kafka, RabbitMQ, Redis
- Databases - PostgreSQL & NoSQL databases like MongoDB
- Product Companies Only
- Tier 1 Engineering Institutes preferred (IIT, NIT, BITS, IIIT, DTU or equivalent)
Preferred:
- Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
- Experience in mentoring, coaching the team.
Role & Responsibilities:
We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
The Ideal Candidate Will Be Able To-
- Take ownership of delivering performant, scalable and high-quality cloud-based software, both frontend and backend side.
- Mentor team members to develop in line with product requirements.
- Collaborate with Senior Architect for design and technology choices for product development roadmap.
- Do code reviews.
Ideal Candidate:
- Thorough knowledge of developing cloud-based software including backend APIs and React-based frontend.
- Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMQ, Redis, MongoDB, ORM, SQL etc.
- Experience with AWS services such as S3, IAM, Lambda etc.
- Expert level coding skills in Python (FastAPI/Django), Node.js, TypeScript, React.js.
- Eye for user responsive designs on the frontend.
SimplyFI is a fast-growing AI and blockchain-powered product company transforming trade finance and banking through digital innovation. We are looking for a Full Stack Tech Lead with strong expertise in ReactJS (primary) and solid working knowledge of Python (secondary) to join our team in Thane, Mumbai.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications with ReactJS as the primary technology.
- Build and integrate backend services using Python (Flask/Django/FastAPI).
- Develop and manage RESTful APIs for system integration.
- Collaborate on AI-driven product features and support machine learning model integration when required.
- Work closely with DevOps teams to deploy, monitor, and optimize applications on AWS.
- Ensure application performance, scalability, security, and code quality.
- Collaborate with product managers, designers, and QA teams to deliver high-quality product features.
- Write clean, maintainable, and testable code following best practices.
- Participate in agile processes: code reviews, sprint planning, and daily standups.
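As an illustration of the Python backend work described above, here is a minimal FastAPI sketch. The trade-finance resource, its fields, and the endpoint paths are assumptions for illustration only.

```python
# Minimal FastAPI sketch of a backend service; domain objects and routes
# are hypothetical, and the "database" is an in-memory dict.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Trade Finance API (sketch)")

class LetterOfCredit(BaseModel):
    lc_id: str
    applicant: str
    beneficiary: str
    amount: float
    currency: str = "USD"

_DB: dict[str, LetterOfCredit] = {}  # stand-in for a real database

@app.post("/locs", response_model=LetterOfCredit)
def create_loc(loc: LetterOfCredit) -> LetterOfCredit:
    _DB[loc.lc_id] = loc
    return loc

@app.get("/locs/{lc_id}", response_model=LetterOfCredit)
def get_loc(lc_id: str) -> LetterOfCredit:
    if lc_id not in _DB:
        raise HTTPException(status_code=404, detail="LC not found")
    return _DB[lc_id]

# Run locally with: uvicorn <module_name>:app --reload  (module name is a placeholder)
```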
Required Skills & Qualifications
- Strong hands-on experience with ReactJS, including hooks, state management, Redux, and API integrations.
- Proficient in backend development using Python with frameworks like Flask, Django, or FastAPI.
- Solid understanding of RESTful API design and secure authentication (OAuth2, JWT).
- Experience working with databases: MySQL, PostgreSQL, MongoDB.
- Familiarity with microservices architecture and modern software design patterns.
- Experience with Git, CI/CD pipelines, Docker, and Kubernetes.
- Strong problem-solving, debugging, and performance optimization skills.
Job Title: Java Developer (Full-time)
Location: Pune (Onsite)
Experience Required: 3+ Years
Working Days: 5 Days (Mon to Fri)
Key Responsibilities:
- Design, develop, and deploy scalable microservices using Java and Spring Boot.
- Implement RESTful APIs and integrate with external systems and databases.
- Build and manage services on AWS Cloud using components like ECS, Lambda, S3, RDS, and API Gateway.
- Collaborate with DevOps to integrate CI/CD pipelines for automated builds, tests, and deployments.
- Ensure application performance, reliability, and security in a cloud-native environment.
- Participate in code reviews, troubleshooting, and performance optimization.
- Work closely with architecture, QA, and product teams to deliver high-quality solutions.
Required Skills & Experience:
- Strong proficiency in Java, Spring Boot, and microservice architecture.
- Hands-on experience with AWS Cloud services (ECS, EKS, Lambda, RDS, CloudWatch, etc.).
- Knowledge of Docker, Kubernetes, and CI/CD tools (Jenkins, GitLab, or AWS CodePipeline).
- Experience with REST APIs, JSON, and message brokers (Kafka, RabbitMQ, or SNS/SQS).
- Proficiency in SQL and experience with relational databases (Oracle, MySQL, or PostgreSQL).
- Familiarity with security best practices, monitoring, and logging in the cloud.
About Upsurge Labs
We're building the infrastructure and products that will shape how human civilization operates in the coming decades. The specifics evolve—the ambition doesn't.
The Role
The way software gets built is undergoing a fundamental shift. AI can now write, test, debug, and ship production-grade systems across web, mobile, embedded, robotics, and infrastructure. The bottleneck is no longer typing code—it's knowing what to build, why, and how the pieces fit together.
We're hiring Systems Engineers: people who can navigate an entire development cycle—from problem definition to production deployment—by directing AI tools and reasoning from first principles. You won't specialize in one stack. You'll operate across all of them.
This role replaces traditional dev teams. You'll work largely autonomously, shipping complete systems that previously required 3-5 specialists.
What You'll Do
- Own entire products and systems end-to-end: architecture, implementation, deployment, iteration
- Work across domains as needed—backend services, frontend interfaces, mobile apps, data pipelines, DevOps, embedded software, robotic systems
- Use AI tools to write, review, test, and debug code at high velocity
- Identify when AI output is wrong, incomplete, or subtly broken—and know how to fix it or when to escalate
- Make architectural decisions: database selection, protocol choices, system boundaries, performance tradeoffs
- Collaborate directly with designers, domain experts, and leadership
- Ship. Constantly.
What You Bring
First-principles thinking
You understand how systems work at a foundational level. When something breaks, you reason backward from the error to potential causes. You know the difference between a network timeout, a malformed query, a race condition, and a misconfigured environment—even if you haven't memorized the fix.
Broad technical fluency
You don't need to be an expert in everything. But you need working knowledge across:
- How web systems work: HTTP, DNS, TLS, REST, WebSockets, authentication flows
- How databases work: relational vs document vs key-value, indexing, query structure, transactions
- How infrastructure works: containers, orchestration, CI/CD, cloud primitives, networking basics
- How frontend works: rendering, state management, browser APIs, responsive design
- How mobile works: native vs cross-platform tradeoffs, app lifecycle, permissions
- How embedded/robotics software works: real-time constraints, sensor integration, communication protocols
You should be able to read code in any mainstream language and understand what it's doing.
AI-native workflow
You've already built real things using AI tools. You know how to prompt effectively, how to structure problems so AI can help, how to validate AI output, and when to step in manually.
High agency
You don't wait for permission or detailed specs. You figure out what needs to happen and make it happen. Ambiguity doesn't paralyze you.
Proof of work
Show us what you've built. Live products, GitHub repos, side projects, internal tools—anything that demonstrates you can ship complete systems.
What We Don't Care About
- Degrees or formal credentials
- Years of experience in a specific language or framework
- Whether you came from a "traditional" engineering path
What You'll Get
- Direct line to the CEO
- Autonomy to own large problem spaces
- A front-row seat to how engineering work is evolving
- Colleagues who ship fast and think clearly
📍 Position: IT Intern (Only candidates from BTech-IT background will be considered)
👩💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)
🎓 Qualification: B.Tech (IT) / M.Tech (IT) only
📌 Mode: Remote (WFH)
⏳ Shift: Willingness to work in night/rotational shifts
🗣 Communication: Excellent English
𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.
- Help create and update technical documentation and knowledge base articles.
- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
💻 Technical Requirements:
- Laptop with i5 or higher processor
- Reliable internet connectivity with 100 Mbps speed
Strong Hands-On AWS Cloud Engineering / DevOps Profile
Mandatory (Experience 1): Must have 5+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
Job Title- React Developer
Job location- Pune/Remote
Availability- Immediate Joiners
Job Type- Fulltime
Experience Range- 5-6yrs
Desired skills - React, Redux/Flux, AWS, JavaScript, TypeScript
Ready and immediately available candidates will be preferred.
As a Frontend React.js Developer, you need to have an eye for building awesome and world class experiences using the best possible libraries in React. We're looking for someone who has:
- Proficiency in modern web frameworks and libraries (e.g. React, Redux, SASS, etc.)
- Solid experience with fundamental web technologies including HTML, CSS and JavaScript
- The candidate is expected to know technical details in React such as state management using Redux, props handling, components and inter-component communication
- They should also be hands-on, very good at JavaScript, a logical thinker, and a problem solver
- Ability to set up reusable, testable and performant UI components, allowing for rapid development and well-organized code
- Comfortable mentoring junior developers, reviewing code and guiding overall frontend architecture
- Experience implementing rich, data driven user experiences
Skills and Experience:
- Good academics
- Strong teamwork and communications
- Advanced troubleshooting skills
JOB DESCRIPTION:
Location: Bangalore
Mode of Work : 3 days from Office
DSA (collections, hash maps, trees, linked lists, arrays, etc.), core OOP concepts (multithreading, multiprocessing, polymorphism, inheritance, etc.), annotations in Spring and Spring Boot, key Java 8 features, database optimization, microservices, and REST APIs
- Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
- Implement and integrate APIs using Spring Framework and Apache CXF.
- Build microservices-based architecture for scalable and distributed systems.
- Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
- Optimize performance through efficient multithreading, memory management, and algorithm design.
- Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
- Work with RDBMS (preferably Sybase) for backend data integration.
- Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
- Work in Unix/Linux environments for deployment and troubleshooting.
- Angular and AWS are must-have skills
The Opportunity
Planview is looking for a passionate Sr Data Scientist to join our team tasked with developing innovative tools for connected work. You are an experienced expert in supporting enterprise applications using Data Analytics, Machine Learning, and Generative AI.
You will use this experience to lead other data scientists and data engineers. You will also effectively engage with product teams to specify, validate, prototype, scale, and deploy features with a consistent customer experience across the Planview product suite.
Responsibilities (What you'll do)
- Enable Data Science features within Planview applications by working in a fast-paced start-up mindset.
- Collaborate closely with product management to enable Data Science features that deliver significant value to customers, ensuring that these features are optimized for operational efficiency.
- Manage every stage of the AI/ML development lifecycle, from initial concept through deployment in a production environment.
- Provide leadership to other Data Scientists by exemplifying exceptional quality in work, nurturing a culture of continuous learning, and offering daily guidance in their research endeavors.
- Effectively communicate ideas drawn from complex data with clarity and insight.
Qualifications (What you'll bring)
- Master’s in operations research, Statistics, Computer Science, Data Science, or related field.
- 8+ years of experience as a data scientist, data engineer, or ML engineer.
- Demonstrable history of bringing Data Science features to Enterprise applications.
- Exceptional Python and SQL coding skills.
- Experience with Optimization, Machine Learning, Generative AI, NLP, Statistics, and Simulation.
- Experience with AWS Data and ML Technologies (SageMaker, Glue, Athena, Redshift)
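For context on the AWS data stack mentioned above, here is a hedged Python sketch of running an Athena query via boto3; the database name, SQL, and S3 output location are placeholders, not Planview specifics.

```python
# Hypothetical sketch of querying AWS Athena from Python with boto3.
import time
import boto3

athena = boto3.client("athena")

def run_query(sql: str, database: str, output_s3: str) -> list[dict]:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:  # poll until the query finishes
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    header = [c.get("VarCharValue") for c in rows[0]["Data"]]
    return [dict(zip(header, [c.get("VarCharValue") for c in r["Data"]]))
            for r in rows[1:]]

# Example (placeholders): run_query("SELECT 1", "analytics_db", "s3://bucket/athena/")
```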
Preferred qualifications:
- Experience working with datasets in the domains of project management, software development, and resource planning.
- Experience with common libraries and frameworks in data science (Scikit Learn, TensorFlow, PyTorch).
- Experience with ML platform tools (AWS SageMaker).
- Skilled at working as part of a global, diverse workforce of high-performing individuals.
- AWS Certification is a plus
We are seeking a highly skilled Staff / Senior Staff Full Stack Engineer (Architect) to join our product engineering team. This is a hands-on, high-impact technical leadership role requiring deep expertise across frontend (React), backend (Node.js), cloud infrastructure, and databases.
You will work closely with Engineering, Product, UX, and cross-functional stakeholders to design and deliver scalable, secure, and high-performance systems, while driving engineering best practices and mentoring senior engineers.
Key Responsibilities:
Architecture & Technical Ownership
- Architect and design complex, scalable full-stack systems across multiple services and teams
- Translate business and product requirements into robust technical solutions
- Drive system design, scalability, performance, and security decisions
Hands-On Development
- Write clean, maintainable, production-grade code in:
- React + TypeScript (frontend)
- Node.js + TypeScript (backend)
- Build and maintain REST & GraphQL APIs
- Develop reusable frontend and backend components
Scalability, Performance & Security
- Optimize applications for speed, scalability, reliability, and cost
- Implement security best practices, authentication/authorization, and data protection
- Ensure compliance with security and regulatory standards (OWASP, GDPR, CCPA, etc.)
Collaboration & Leadership
- Partner closely with Product, UX, QA, DevOps, and Marketing
- Lead architecture discussions, design reviews, and code reviews
- Mentor senior engineers and lead by influence (not hierarchy)
- Promote engineering excellence through TDD, CI/CD, observability, and automation
Documentation & Communication
- Clearly document architecture, system flows, and design decisions
- Communicate complex technical concepts to non-technical stakeholders
- Contribute to long-term technology strategy and roadmap
Required Experience & Qualifications
Education & Experience
- Bachelor’s or Master’s degree in Computer Science or related field
- 10+ years of overall software engineering experience
- 7+ years of hands-on full-stack development experience
- Proven delivery of large-scale, complex systems
Core Technical Skills
Frontend
- Expert in React (architecture, performance, state management)
- Strong TypeScript
- Deep knowledge of HTML5, CSS3, responsive & adaptive design
- Experience with Redux / Context API, CSS-in-JS or Tailwind
- Familiarity with build tools: Webpack, Babel, npm/yarn
- Frontend testing: Jest, Vitest, Cypress, Storybook
Backend
- Strong hands-on experience with Node.js
- Frameworks: Express, NestJS, Koa
- API design: REST & GraphQL
- Serverless experience (AWS Lambda / Cloud Functions)
Databases & Caching
- SQL: PostgreSQL / MySQL
- NoSQL: MongoDB, Redis
- Database schema design, indexing, and performance tuning
- Caching & search: Redis, Elasticsearch
Cloud, Infra & DevOps
- Strong experience with AWS / GCP / Azure
- Containers: Docker, Kubernetes
- CI/CD: GitHub Actions, GitLab CI, Jenkins
- CDN, infrastructure scaling, and observability
- Git expertise (GitHub / GitLab / Bitbucket)
Security & Systems
- Web security best practices (OWASP)
- Authentication & authorization (OAuth, JWT)
- Experience with high-availability, fault-tolerant systems
Leadership & Ways of Working
- Strong track record of technical leadership and delivery
- Experience mentoring senior and staff-level engineers
- Ability to conduct high-quality code and design reviews
- Comfortable working in Agile (Scrum/Kanban) environments
- Excellent verbal and written communication
- Strong analytical and problem-solving skills
- Ability to learn and adapt quickly to new technologies
Perks & Benefits
- Day off on the 3rd Friday of every month (monthly long weekend)
- Monthly Wellness Reimbursement Program
- Monthly Office Commutation Reimbursement
- Paid paternity & maternity leave
Software Engineer
Company Introduction
Since 1983, the company has been an innovative software business that has continued to grow year on year due to its continued success. With ongoing market developments and investment within the company, exciting times are ahead of the team!
The Role-
We are looking for a full stack engineer to join our team. We ideally hire engineers who are comfortable across the full stack, but we know you will have a preference about being on the front-end or back-end. As long as you're happy to work on both sets of tasks – you should carry on reading!
Our Technology
• Front-end: JavaScript, Angular (or good understanding of React, Vue JS, Knockout JS or similar)
• Back-end: C#, ASP.NET, Web API, MVC, Entity Framework
• Database: SQL Server. Knowledge of non-SQL databases is a plus
• Cloud: Microsoft Azure, AWS
Responsibilities-
• Design of the overall architecture of the web application
• Implementation of a robust set of services and APIs to power the web application
• Building reusable code and libraries for future use
• Optimization of the application for maximum speed and scalability
• Implementation of security and data protection
• Translation of UI/UX wireframes to visual elements
• Integration of the front-end and back-end aspects of the web application
Additional responsibilities for Project Lead
• Active participation in the design/build cycle of the software engineering life cycle (prototyping, architecture, detailed design, development, testing and deployment).
• Providing expertise in technical analysis and solving technical issues during project delivery.
• Code reviews, test case reviews and ensure code developed meets the requirements.
• Collaborate with product management and engineering to define and implement innovative solutions for the product direction, visuals and experience.
• Requirement gathering and understanding; analyze and convert functional requirements into concrete technical tasks and provide reasonable effort estimates.
• Mentor and develop skills of junior software engineers in the team.
Tech Skills and Qualifications
• Contract Length : 1 year
• Software Engineering Degree with 3-5 years of experience.
• Expert knowledge of JavaScript and Node.js, good understanding of Angular and JavaScript testing frameworks (such as Jest, Mocha etc.)
• Good understanding of Cloud Native architecture, containerisation, Docker, Microsoft Azure/AWS, CI/CD, and DevOps culture.
• Knowledge of cloud-based SaaS applications/architecture.
• Practical experience in the use of leading engineering practices and principles.
• Practical experience of building robust solutions at large scale.
• Appreciation for functions of Product and Design, experience working in cross-functional teams.
• Understanding differences between multiple delivery platforms (such as mobile vs. desktop), and optimizing output to match the specific platform.
We are seeking a highly skilled and experienced Python Developer with a strong background in fintech to join our dynamic team. The ideal candidate will have at least 7+ years of professional experience in Python development, with a proven track record of delivering high-quality software solutions in the fintech industry.
Responsibilities:
Design, build, and maintain RESTful APIs using Django and Django Rest Framework.
Integrate AI/ML models into existing applications to enhance functionality and provide data-driven insights.
Collaborate with cross-functional teams, including product managers, designers, and other developers, to define and implement new features and functionalities.
Manage deployment processes, ensuring smooth and efficient delivery of applications.
Implement and maintain payment gateway solutions to facilitate secure transactions.
Conduct code reviews, provide constructive feedback, and mentor junior members of the development team.
Stay up-to-date with emerging technologies and industry trends, and evaluate their potential impact on our products and services.
Maintain clear and comprehensive documentation for all development processes and integrations.
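As a small illustration of the Django/DRF API work described above, here is a hedged single-file sketch of a payments endpoint; the resource name, its fields, and the URL are assumptions, not the company's actual API.

```python
# Single-file Django + DRF sketch of a payments endpoint (illustrative only).
import sys
import django
from django.conf import settings

settings.configure(
    DEBUG=True,
    SECRET_KEY="sketch-only-not-for-production",
    ROOT_URLCONF=__name__,
    ALLOWED_HOSTS=["*"],
    INSTALLED_APPS=[
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "rest_framework",
    ],
)
django.setup()

from django.urls import path
from rest_framework import serializers, status
from rest_framework.response import Response
from rest_framework.views import APIView

_PAYMENTS: list[dict] = []  # in-memory stand-in for a real model/queryset

class PaymentSerializer(serializers.Serializer):
    reference = serializers.CharField(max_length=64)
    amount = serializers.DecimalField(max_digits=12, decimal_places=2)
    currency = serializers.CharField(max_length=3, default="INR")

class PaymentList(APIView):
    def get(self, request):
        return Response(_PAYMENTS)

    def post(self, request):
        ser = PaymentSerializer(data=request.data)
        ser.is_valid(raise_exception=True)
        _PAYMENTS.append(ser.validated_data)
        return Response(ser.validated_data, status=status.HTTP_201_CREATED)

urlpatterns = [path("api/payments/", PaymentList.as_view())]

if __name__ == "__main__":
    # e.g. python payments_sketch.py runserver --noreload
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
```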
Requirements:
Proficiency in Python and Django/Django Rest Framework.
Experience with REST API development and integration.
Knowledge of AI/ML concepts and practical experience integrating AI/ML models.
Hands-on experience with deployment tools and processes.
Familiarity with payment gateway integration and management.
Strong understanding of database systems (SQL, PostgreSQL, MySQL).
Experience with version control systems (Git).
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Job Types: Full-time, Permanent
Work Location: In person
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 4 - 10 years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Experience: 4 – 10 years
Notice: Immediate to 15 days
Responsibilities :
- Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
- Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
- Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
- Implement data governance and security best practices to ensure compliance and data integrity.
- Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
- Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
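To illustrate the pipeline work described above, here is a hedged Python sketch of a small S3 ETL step (CSV in, Parquet out) using boto3 and pandas; bucket names, keys, and the transform itself are placeholders.

```python
# Illustrative S3-based ETL step (requires boto3, pandas, and pyarrow).
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

def csv_to_parquet(src_bucket: str, src_key: str,
                   dst_bucket: str, dst_key: str) -> int:
    """Read a raw CSV from S3, apply a light transform, write Parquet back."""
    obj = s3.get_object(Bucket=src_bucket, Key=src_key)
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Example transform: normalise column names and drop exact duplicates.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()

    buf = io.BytesIO()
    df.to_parquet(buf, index=False)
    s3.put_object(Bucket=dst_bucket, Key=dst_key, Body=buf.getvalue())
    return len(df)

# Example (placeholders):
# csv_to_parquet("raw-zone", "orders/2024-01-01.csv",
#                "curated-zone", "orders/2024-01-01.parquet")
```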
Qualifications :
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4-10 years of prior experience in data engineering, with a focus on designing and building data pipelines.
- Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
About JobTwine
JobTwine is an AI-powered platform offering Interview as a Service, helping companies hire 50% faster while doubling the quality of hire. AI Interviews, Human Decisions, Zero Compromises. We leverage AI with human expertise to discover, assess, and hire top talent. JobTwine automates scheduling, uses an AI Copilot to guide human interviewers for consistency, and generates structured, high-quality automated feedback.
Role Overview
We are looking for a Senior DevOps Engineer with 4–5 years of experience, a product-based mindset, and the ability to thrive in a startup environment
Key Skills & Requirements
- 4–5 years of hands-on DevOps experience
- Experience in product-based companies and startups
- Strong expertise in CI/CD pipelines
- Hands-on experience with AWS / GCP / Azure
- Experience with Docker & Kubernetes
- Strong knowledge of Linux and Shell scripting
- Infrastructure as Code: Terraform / CloudFormation
- Monitoring & logging: Prometheus, Grafana, ELK stack
- Experience in scalability, reliability and automation
What You Will Do
- Work closely with Sandip, CTO of JobTwine on Gen AI DevOps initiatives
- Build, optimize, and scale infrastructure supporting AI-driven products
- Ensure high availability, security and performance of production systems
- Collaborate with engineering teams to improve deployment and release processes
Why Join JobTwine ?
- Direct exposure to leadership and real product decision-making
- Steep learning curve with high ownership and accountability
- Opportunity to build and scale a core B2B SaaS product
Job Title: Software Development Engineer – III (SDE-III)
Location: Sector 55, Gurugram (Onsite)
Work Timings: Regular day shift, 5 days working
About Master-O
Master-O is a next-generation sales enablement and microskill learning platform designed to empower frontline sales teams through gamification, AI-driven coaching, and just-in-time learning. We work closely with large enterprises to improve sales readiness, productivity, and on-ground performance at scale.
As we continue to build intelligent, scalable, and enterprise-ready products, we are looking for a seasoned SDE-III who can take ownership of complex modules, mentor engineers, and contribute to architectural decisions.
Role Overview
As an SDE-III at Master-O, you will play a critical role in designing, building, and scaling core product features used by large enterprises with high user volumes. You will work closely with Product, Design, and Customer Success teams to deliver robust, high-performance solutions while ensuring best engineering practices.
This is a hands-on role requiring strong technical depth, system thinking, and the ability to work in a fast-paced B2B SaaS environment.
Required Skills & Experience
- 4–5 years of full-time professional experience in software development
- Strong hands-on experience with:
- React.js
- Node.js & Express.js
- JavaScript
- MySQL
- AWS
- Prior experience working in B2B SaaS companies (preferred)
- Experience handling enterprise-level applications with high concurrent users
- Solid understanding of REST APIs, authentication, authorization, and backend architecture
- Strong problem-solving skills and ability to write clean, maintainable, and testable code
- Comfortable working in an onsite, collaborative team environment
Good to Have
- Experience working with or integrating LLMs, AI assistants, or Agentic AI systems
- Experience with cloud platforms and deployment workflows
- Prior experience in EdTech, Sales Enablement, or Enterprise Productivity tools
Why Join Master-O?
- Opportunity to build AI-first, enterprise-grade products from the ground up
- High ownership role with real impact on product direction and architecture
- Work on meaningful problems at the intersection of sales, learning, and AI
- Collaborative culture with fast decision-making and minimal bureaucracy
- Be part of a growing product company shaping the future of sales readiness
Job Description
Key Responsibilities
- API & Service Development:
- Build RESTful and GraphQL APIs for e-commerce, order management, inventory, pricing, and promotions.
- Database Management:
- Design efficient schemas and optimize performance across SQL and NoSQL data stores.
- Integration Development:
- Implement and maintain integrations with ERP (SAP B1, ERPNext), CRM, logistics, and third-party systems.
- System Performance & Reliability:
- Write scalable, secure, and high-performance code to support real-time retail operations.
- Collaboration:
- Work closely with frontend, DevOps, and product teams to ship new features end-to-end.
- Testing & Deployment:
- Contribute to CI/CD pipelines, automated testing, and observability improvements.
- Continuous Improvement:
- Participate in architecture discussions and propose improvements to scalability and code quality.
Requirements
Required Skills & Experience
- 3–5 years of hands-on backend development experience in Node.js, Python, or Java.
- Strong understanding of microservices, REST APIs, and event-driven architectures.
- Experience with databases such as MySQL/PostgreSQL (SQL) and MongoDB/Redis (NoSQL).
- Hands-on experience with AWS / GCP and containerization (Docker, Kubernetes).
- Familiarity with Git, CI/CD, and code review workflows.
- Good understanding of API security, data protection, and authentication frameworks.
- Strong problem-solving skills and attention to detail.
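As a sketch of the event-driven pattern mentioned above, here is a minimal Kafka producer example using kafka-python; the broker address, topic name, and event schema are assumptions for illustration.

```python
# Publishing an order event to Kafka with kafka-python (illustrative only).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",          # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_order_created(order_id: str, total: float, currency: str = "INR"):
    event = {
        "type": "order.created",
        "order_id": order_id,
        "total": total,
        "currency": currency,
    }
    # Key by order_id so events for one order land on the same partition.
    producer.send("orders", key=order_id.encode("utf-8"), value=event)
    producer.flush()

if __name__ == "__main__":
    publish_order_created("ORD-1001", 2499.00)
```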
Nice to Have
- Experience in e-commerce or omnichannel retail platforms.
- Exposure to ERP / OMS / WMS integrations.
- Familiarity with GraphQL, Serverless, or Kafka / RabbitMQ.
- Understanding of multi-brand or multi-country architecture challenges.

Global digital transformation solutions provider.
JOB DETAILS:
Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering
Industry: Global digital transformation solutions provider
Work Mode: Hybrid
Salary: Best in Industry
Experience: 6-8 years
Location: Hyderabad
Job Description:
• Experience in Microsoft Web development technologies such as Web API, SOAP XML
• C#/.NET/.NET Core and ASP.NET web application experience; cloud-based development experience in AWS or Azure
• Knowledge of cloud architecture and technologies
• Support/Incident management experience in a 24/7 environment
• SQL Server and SSIS experience
• DevOps experience with GitHub and Jenkins CI/CD pipelines or similar
• Windows Server 2016/2019+ and SQL Server 2019+ experience
• Experience of the full software development lifecycle
• You will write clean, scalable code, with a view towards design patterns and security best practices
• Understanding of Agile methodologies, working within the Scrum framework
• AWS knowledge
Must-Haves
C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)
.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)
Mandatory skills: .NET Core with Azure or AWS experience
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Drive - 17th Jan
Review Criteria
- Strong Data / ETL Test Engineer
- 5+ years of overall experience in Testing/QA
- 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, loading validation, reconciliation, working across BI / Analytics / Data Warehouse / e-Governance platforms
- Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
- Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
- Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
- Hands-on experience with ETL tools and SQL-based data validation is mandatory (Working knowledge or hands-on exposure to Redshift and/or Qlik will be considered sufficient)
- Must hold a Bachelor's degree (B.E./B.Tech) or a Master's degree (M.Tech/MCA/M.Sc/MS)
- Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
- Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)
Preferred
- Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)
Job Specific Criteria
- CV Attachment is mandatory
- Do you have experience working on Government projects/companies? If so, briefly describe the project.
- Do you have experience working on enterprise projects/companies? If so, briefly describe the project.
- Please mention the names of 2 key projects you have worked on related to Data Warehouse / ETL / BI testing.
- Do you hold any ISTQB or Data / BI certifications (Power BI, Snowflake, AWS, etc.)?
- Do you have exposure to BI tools such as Qlik?
- Are you willing to relocate to Delhi and why (if not from Delhi)?
- Are you available for a face-to-face round?
Role & Responsibilities
- 5 years' experience in Data Testing across BI/Analytics platforms, with at least 2 large-scale enterprise Data Warehouse / Analytics / e-Governance programs
- Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes
- Proficient in analyzing business requirements and data mapping specifications (BRDs, Data Models, Source-to-Target Mappings, User Stories, Reports, Dashboards) to define comprehensive test scenarios and test cases
- Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies ensuring accuracy, consistency, and performance of data pipelines
- Ability to test and validate data migrated from an old platform to an upgraded platform and ensure the completeness and correctness of the migration
- Experience in conducting tests of migrated data and defining test scenarios and test cases for the same
- Experience with BI tools like Qlik, ETL platforms, Data Lake platforms, and Redshift to support end-to-end validation
- Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations
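To illustrate the data-migration and reconciliation validation described above, below is a minimal, self-contained Python sketch that compares row counts and a simple column aggregate between a "legacy" and a "migrated" table; SQLite is used only so the example runs anywhere, whereas a real engagement would issue equivalent SQL against the source and target platforms (e.g., Redshift).

```python
# Minimal sketch of source-vs-target reconciliation for migration testing.
# SQLite is used only so the example is self-contained; real checks would run
# the same SQL against the legacy and migrated platforms.
import sqlite3

def table_profile(conn: sqlite3.Connection, table: str) -> tuple:
    """Return (row_count, sum_of_amount) as a crude reconciliation fingerprint."""
    cur = conn.execute(f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}")
    return cur.fetchone()

def reconcile(src: sqlite3.Connection, tgt: sqlite3.Connection, table: str) -> None:
    src_profile = table_profile(src, table)
    tgt_profile = table_profile(tgt, table)
    status = "MATCH" if src_profile == tgt_profile else "MISMATCH"
    print(f"{table}: source={src_profile} target={tgt_profile} -> {status}")

if __name__ == "__main__":
    # Hypothetical legacy and migrated copies of a fact table.
    src = sqlite3.connect(":memory:")
    tgt = sqlite3.connect(":memory:")
    for conn in (src, tgt):
        conn.execute("CREATE TABLE fact_sales (id INTEGER, amount REAL)")
    src.executemany("INSERT INTO fact_sales VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
    tgt.executemany("INSERT INTO fact_sales VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
    reconcile(src, tgt, "fact_sales")
```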
About the Role
We're seeking a skilled Java Developer with strong AWS cloud experience to join our solution architecture team. You'll be building scalable backend systems, integrating diverse enterprise platforms, and developing cloud-native solutions for clients across government, agriculture, and manufacturing sectors.
Key Responsibilities
Application Development
• Design and develop robust Java-based backend services and APIs for enterprise applications
• Build microservices architectures for cloud-native deployments on AWS
• Implement RESTful APIs and SOAP web services for enterprise integration
• Develop serverless applications using AWS Lambda and event-driven architectures
• Create data processing pipelines using AWS services
AWS Cloud Development
• Deploy and manage applications on AWS infrastructure (EC2, ECS, EKS)
• Implement serverless solutions using Lambda, API Gateway, and Step Functions
• Design and implement storage solutions using S3, EBS, and EFS
• Work with AWS databases (RDS, Aurora, DynamoDB)
• Implement messaging and queuing using SQS, SNS, and EventBridge
• Configure and manage application monitoring using CloudWatch
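The messaging and queuing items above (SQS, SNS) boil down to a couple of SDK calls; the sketch below shows them in Python with boto3 for brevity, even though the role itself is Java-centric. The region, topic ARN, and queue URL are placeholders.

```python
# Illustrative boto3 sketch of SNS publish + SQS send (all values are placeholders).
# Assumes AWS credentials are configured in the environment.
import json
import boto3

REGION = "ap-south-1"                                                          # placeholder
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:order-events"                 # placeholder
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/order-queue"    # placeholder

def publish_order_event(order_id: str) -> None:
    sns = boto3.client("sns", region_name=REGION)
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"order_id": order_id}))

def enqueue_order(order_id: str) -> None:
    sqs = boto3.client("sqs", region_name=REGION)
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"order_id": order_id}))

if __name__ == "__main__":
    publish_order_event("ORD-42")
    enqueue_order("ORD-42")
```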
System Integration
• Design and implement integration solutions connecting disparate enterprise systems
• Build adapters and connectors for third-party APIs, legacy systems, and SaaS platforms
• Implement enterprise integration patterns (message routing, transformation, orchestration)
• Develop middleware solutions using AWS integration services
• Handle data synchronization, format transformations (XML, JSON, CSV), and protocol conversions
Database & Performance
• Design and optimize database schemas for AWS RDS (PostgreSQL, MySQL) and Aurora
• Write efficient SQL queries, stored procedures, and optimize database performance
• Implement caching strategies using AWS ElastiCache (Redis/Memcached)
• Configure database connection pooling and manage high-availability setups
• Troubleshoot database bottlenecks and resolve concurrency issues
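The caching strategy mentioned above is typically a cache-aside pattern; here is a minimal Python sketch against a Redis endpoint (the protocol ElastiCache exposes). The host name, key format, and `load_from_db` stub are hypothetical.

```python
# Cache-aside sketch against a Redis endpoint (ElastiCache speaks the same protocol).
# Assumes `pip install redis`; host, key naming, and the DB stub are hypothetical.
import json
import redis

r = redis.Redis(host="my-cache.abc123.apse1.cache.amazonaws.com", port=6379,
                decode_responses=True)

def load_from_db(user_id: str) -> dict:
    # Placeholder for a real RDS/Aurora query.
    return {"id": user_id, "name": "demo-user"}

def get_user(user_id: str, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                    # cache hit
    user = load_from_db(user_id)                     # cache miss: fetch from the database
    r.setex(key, ttl_seconds, json.dumps(user))      # write back with a TTL
    return user
```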
Security & DevOps
• Implement AWS security best practices (IAM, Security Groups, KMS)
• Build CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy
• Configure application auto-scaling and load balancing
Quality & Best Practices
• Write clean, maintainable code following SOLID principles and design patterns
• Implement comprehensive unit and integration testing
• Participate in code reviews and technical design discussions
• Document technical specifications, API contracts, and AWS architecture diagrams
Required Skills & Experience
Core Java Expertise
• 3-5 years of hands-on Java development experience (Java 11+ preferred)
• Strong understanding of OOP concepts, data structures, and algorithms
• Experience with Spring Framework (Spring Boot, Spring MVC, Spring Data JPA)
• Proficiency in building RESTful and SOAP web services
AWS Cloud Experience (Must Have)
• 2+ years of hands-on AWS experience with production deployments
• Strong knowledge of core AWS services: EC2, S3, RDS, Lambda, API Gateway
• Experience with AWS networking: VPC, subnets, security groups, load balancers
• Understanding of AWS IAM, security best practices, and compliance
• Experience with AWS monitoring and logging (CloudWatch, X-Ray)
• Knowledge of AWS messaging services (SQS, SNS, EventBridge)
• Familiarity with AWS database services (RDS, Aurora, DynamoDB)
Integration Experience
• Experience with enterprise integration patterns and middleware solutions
• Knowledge of API design, development, and management
• Understanding of authentication/authorization mechanisms (OAuth2, JWT, SAML)
• Experience with data transformation and mapping frameworks
• Familiarity with integration protocols (HTTP/S, FTP/SFTP, SMTP, JMS)
Database Skills
• Strong SQL skills with relational databases (PostgreSQL, MySQL preferred)
• Experience with JPA/Hibernate ORM frameworks
• Understanding of database connection pooling and transaction management
• Knowledge of database migrations and versioning tools (Flyway/Liquibase)
Technical Stack
• Build tools: Maven or Gradle
• Version control: Git
• API documentation: Swagger/OpenAPI
• Testing frameworks: JUnit, Mockito, TestNG
• Containerization: Docker (experience with ECS/EKS is a plus)
Must have
• Proven usage of Agentic AI Tools in SDLC
Bonus Skills
• AWS Certifications (Developer)
• Experience with message brokers (Kafka, RabbitMQ, ActiveMQ, Amazon MSK)
• Kubernetes and container orchestration (EKS)
• AWS serverless application development (SAM, Serverless Framework)
• Experience with Apache Camel, MuleSoft, or other integration platforms
• CI/CD tools (Jenkins, GitLab CI, GitHub Actions, AWS CodePipeline)
• Knowledge of React/Angular technologies for full-stack collaboration
• Experience with ERP/CRM systems integration
Desired Attributes
• Strong problem-solving and analytical thinking abilities
• Ability to design cloud-native architectures following the AWS Well-Architected Framework
• Good communication skills for interacting with clients and cross-functional teams
• Self-motivated with ability to work independently and in team environments
• Attention to detail and commitment to delivering quality solutions
• Cost-conscious approach to AWS resource utilization
Education
• Bachelor's or Master's degree in Computer Science, Information Technology, or related field
• Equivalent practical experience will be considered
• AWS certifications are highly valued
What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, patient journey, and supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better predictive models.
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
- Experience working with data processing, transformation and building model pipelines using tools such as Spark, Airflow, and Docker.
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
- You can write production level code, work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiar with medical and healthcare data (medical claims, Rx, preferred).
- Conversant with cloud technologies such as AWS or Google Cloud.
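As a toy illustration of the predictive-modelling work described in this role, the sketch below trains a gradient-boosting classifier on synthetic data with scikit-learn; the features and labels are synthetic stand-ins, not claims or campaign data.

```python
# Toy predictive-model sketch on synthetic data (stand-in for claims/clickstream features).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print(f"holdout AUC: {roc_auc_score(y_test, scores):.3f}")
```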
We are looking for Senior Software Engineers responsible for designing, developing, and maintaining large-scale distributed ad technology systems. This entails working on several different systems, platforms, and technologies. You will collaborate with various engineering teams to meet a range of technological challenges, work with our product team to contribute to and influence the roadmap of our products and technologies, and influence and inspire team members.
Experience
- 3 - 10 Years
Required Skills
- 3+ years of work experience and a degree in computer science or a similar field
- Knowledgeable about computer science fundamentals including data structures, algorithms, and coding
- Enjoy owning projects from creation to completion and wearing multiple hats
- Product focused mindset
- Experience building distributed systems capable of handling large volumes of traffic
- Fluency with Java, Vertex, Redis, Relational Databases
- Possess good communication skills
- Enjoy working in a team-oriented environment that values excellence
- Have a knack for solving very challenging problems
- (Preferred) Previous experience in advertising technology or gaming apps
- (Preferred) Hands-on experience with Spark, Kafka or similar open-source software
Responsibilities
- Creating design and architecture documents
- Conducting code reviews
- Collaborate with others in the engineering teams to meet a range of technological challenges
- Build, design, and develop large-scale advertising technology systems capable of handling tens of billions of events daily
Education
- UG - B.Tech/B.E. - Computers; PG - M.Tech - Computer
What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and inclusive work environment.
Salary budget up to 50 LPA or a 20% hike on current CTC
You can text me over LinkedIn for a quick response.
About Borderless Access
Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.
We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.
Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.
The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.
Key Responsibilities
- Lead, mentor, and grow a cross-functional team of engineers.
- Foster a culture of collaboration, accountability, and continuous learning.
- Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
- Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
- Promote clean, maintainable, and well-documented code across the team.
- Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
- Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
- Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
- Ensure timely delivery of high-quality software aligned with business goals.
- Work closely with DevOps to ensure platform reliability, scalability, and observability.
- Conduct regular 1:1s, performance reviews, and career development planning.
- Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.
Added Responsibilities
- Defining and adhering to the development process.
- Taking part in regular external audits and maintaining artifacts.
- Identify opportunities for automation to reduce repetitive tasks.
- Mentor and coach team members across teams.
- Continuously optimize application performance and scalability.
- Collaborate with the Marketing team to understand different user journeys.
Growth and Development
The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:
- Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
- Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
- Drive business objectives – Become part of defining and taking actions to meet the business objectives.
About You
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software development.
- Experience with microservices architecture and container orchestration.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Solid understanding of data structures, algorithms, and software design patterns.
- Solid understanding of enterprise system architecture patterns.
- Experience in managing a small to medium-sized team with varied experiences.
- Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
- Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
- Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
- Experience with cloud platforms such as AWS, Azure, or GCP (Azure preferred).
- Knowledge of containerization technologies such as Docker and Kubernetes.
Hiring for Data Engineer
- Experience: 4-6 yrs
- Education: BE/B.Tech
- Work Location: Noida (WFO)
- Notice Period: Immediate
- Skills: PySpark, SQL, AWS/GCP, Hadoop
Key Responsibilities
- Administer and optimize PostgreSQL databases on AWS RDS
- Monitor database performance, health, and alerts using CloudWatch
- Manage backups, restores, upgrades, and high availability
- Support CDC pipelines using Debezium with Kafka & Zookeeper
- Troubleshoot database and replication/streaming issues
- Ensure database security, access control, and compliance
- Work with developers and DevOps teams for production support
- Use tools like DBeaver for database management and analysis
Required Skills
- Strong experience with PostgreSQL DBA activities
- Hands-on experience with AWS RDS
- Knowledge of Debezium, Kafka, and Zookeeper
- Monitoring using AWS CloudWatch
- Proficiency in DBeaver and SQL
- Experience supporting production environments
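In practice, the CDC support described above often comes down to a small consumer that reads Debezium change events off Kafka; a minimal Python sketch using kafka-python is shown below. The topic name and broker address are placeholders, and the event layout assumes Debezium's default JSON envelope.

```python
# Minimal consumer for Debezium change events (placeholder topic/broker; assumes the
# default Debezium JSON envelope with a "payload" containing op/before/after).
# Requires `pip install kafka-python`.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "pgserver.public.orders",                   # placeholder Debezium topic
    bootstrap_servers="localhost:9092",         # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    payload = (message.value or {}).get("payload", {})
    op = payload.get("op")                      # c=create, u=update, d=delete
    if op in ("c", "u"):
        print("upsert:", payload.get("after"))
    elif op == "d":
        print("delete:", payload.get("before"))
```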
Role & Responsibilities:
As a Full Stack Developer Intern, you will take on significant responsibilities in the design, development, and maintenance of web applications using Next.js, React.js, Node.js, PostgreSQL, and AWS Cloud services. We seek individuals who are self-motivated, energetic, and capable of delivering high-quality work with minimal supervision.
- Develop user-friendly web applications using Next.js and React.js.
- Create and implement RESTful APIs using Node.js.
- Write high-quality, maintainable code while adhering to best practices in software development.
- Deliver projects on time while maintaining a strong focus on performance and user experience.
- Manage data effectively using PostgreSQL databases.
- Code Quality & Reviews: Maintain code quality standards and conduct regular code reviews to ensure the delivery of high-quality, error-free code.
- Performance Optimization: Identify and troubleshoot performance bottlenecks to ensure a seamless and lightning-fast platform experience.
- Bug Fixing & Maintenance: Monitor platform performance and proactively address any issues or bugs to keep the platform running flawlessly.
- Contribute innovative ideas and solutions during team discussions and brainstorming sessions.
- Communicate openly and honestly with team members, sharing insights and feedback constructively.
- Stay updated on emerging technologies and demonstrate a willingness to learn more.
Qualification:
- Graduate/Post-Graduate with a degree in Computer Science, Software Engineering, or a related field.
- Proficiency in HTML, CSS, JavaScript, and modern front-end frameworks (specifically Next.js and React.js).
- Strong knowledge of back-end technologies such as Node.js and Express.js.
- Experience with relational databases, particularly PostgreSQL.
- Familiarity with AWS Cloud services is a plus.
- Excellent problem-solving skills with a proactive approach to challenges.
- Proven ability to troubleshoot and resolve complex technical issues.
- Strong communication skills with the confidence to share ideas openly.
- High energy level and passion for contributing to the company’s success with integrity and honesty.
- Startup Enthusiast: Embrace the fast-paced and dynamic environment of a startup, driven by a passion for making a positive impact.
Drive the design, automation, and reliability of Albert Invent’s core platform to support scalable, high-performance AI applications.
You will partner closely with Product Engineering and SRE teams to ensure security, resiliency, and developer productivity while owning end-to-end service operability.
Key Responsibilities
- Own the design, reliability, and operability of Albert’s mission-critical platform.
- Work closely with Product Engineering and SRE to build scalable, secure, and high-performance services.
- Plan and deliver core platform capabilities that improve developer velocity, system resilience, and scalability.
- Maintain a deep understanding of microservices topology, dependencies, and behavior.
- Act as the technical authority for performance, reliability, and availability across services.
- Drive automation and orchestration across infrastructure and operations.
- Serve as the final escalation point for complex or undocumented production issues.
- Lead root-cause analysis, mitigation strategies, and long-term system improvements.
- Mentor engineers in building robust, automated, and production-grade systems.
- Champion best practices in SRE, reliability, and platform engineering.
Must-Have Requirements
- Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
- 4+ years of strong backend coding in Python or Node.js.
- 4+ years of overall software engineering experience, including 2+ years in an SRE / automation-focused role.
- Strong hands-on experience with Infrastructure as Code (Terraform preferred).
- Deep experience with AWS cloud infrastructure and distributed systems (microservices, APIs, service-to-service communication).
- Experience with observability systems – logs, metrics, and tracing.
- Experience using CI/CD pipelines (e.g., CircleCI).
- Performance testing experience using K6 or similar tools.
- Strong focus on automation, standards, and operational excellence.
- Experience building low-latency APIs (< 200ms response time).
- Ability to work in fast-paced, high-ownership environments.
- Proven ability to lead technically, mentor engineers, and influence engineering quality.
Good-to-Have Skills
- Kubernetes and container orchestration experience.
- Observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
- Experience building Internal Developer Platforms (IDPs) or reusable engineering frameworks.
- Exposure to ML infrastructure or data engineering pipelines.
- Experience working in compliance-driven environments (SOC2, HIPAA, etc.).
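To make the observability expectation above concrete, here is a minimal Python sketch that exposes request counters and latency histograms with the prometheus_client library; the metric names and the simulated handler are illustrative only.

```python
# Minimal metrics-instrumentation sketch with prometheus_client (names are illustrative).
# Requires `pip install prometheus-client`; metrics are served on :8000/metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("api_requests_total", "Total API requests", ["endpoint"])
LATENCY = Histogram("api_request_seconds", "API request latency", ["endpoint"])

def handle_request(endpoint: str) -> None:
    REQUESTS.labels(endpoint=endpoint).inc()
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.05))   # simulate request work

if __name__ == "__main__":
    start_http_server(8000)                      # scrape target for Prometheus
    while True:
        handle_request("/healthz")
```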
Hey there budding tech wizard! Are you ready to take on a new challenge?
As a Senior Software Developer 1 (Full Stack) at Techlyticaly, you'll be responsible for solving problems and flexing your tech muscles to build amazing stuff, mentoring and guiding others. You'll work under the guidance of mentors and be responsible for developing high-quality, maintainable code modules that are extensible and meet the technical guidelines provided.
Responsibilities
We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:
- Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
- Efficient in at least one product or technology of strategic importance to the organisation, and a true tech ninja.
- Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
- Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
- Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
- Manage assigned tasks well and schedule them appropriately for self and team, while providing visibility to the mentor and understanding the mentor's expectations of work. But don't be afraid to add your own twist to the work you're doing.
- Consistently apply and improve team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.
- Clarify requirements and provide end-to-end estimates. We all love it when requirements are clear (Don’t we?)
- Participate in release planning and design complex modules & features.
- Work with product and business teams directly for critical issue ownership. Isn’t it better when one of us understands what they say?
- Feel empowered by managing deployments and assisting in infra management.
- Act as role model for the team and guide them to brilliance. We all feel secured when we have someone to look up to.
Qualifications
We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:
- You have a Bachelor's or Master’s degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
- You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
- You're proficient with web programming languages such as HTML, CSS, JavaScript with at least 5+ years of experience, but you're also open to learning new languages and technologies that might not be as mainstream.
- You’ve 5+ years of experience with backend web framework Django and DRF.
- You’ve 5+ years of experience with frontend web framework React.
- Your knowledge of cloud service providers like AWS, GCP, Azure, etc. will be an added bonus.
- You have experience with testing, code, and design reviews.
- You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
- You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.
- You've demonstrated your ability to lead a small team of developers.
- And most important, you're also excited to learn about new things and try out new ideas.
Compensation:
We know you're passionate and talented, and we want to reward you for that. That's why we're offering a compensation package of 15 - 17 LPA!
This is a mid-level position: you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!
We are located in Delhi. This post may require relocation.
The ROLE:
The Enterprise Architect plays a pivotal role in helping organizations leverage cloud technologies to meet their business goals.
You will develop enterprise cloud strategies, lead delivery teams in customer workshops, and coordinate with multiple stakeholders across the client hierarchy to engage, understand, and articulate requirements and create solutions. You will have expertise in one or more public cloud provider technologies and be capable of embedding these capabilities into existing or new cloud-based infrastructure and platform solutions.
As an enterprise architect, you will engage with clients to define and design the cloud technology architecture and solution architecture. You will also define application and IT architecture, focusing on mapping business requirements to IT capabilities and aligning the two.
The focus is to define the integrations between applications, data, and technology in the enterprise, and the transitional process necessary for migrating and modernizing to a cloud-based architecture that includes APIs, microservices, and containers.
You will keep yourself updated with the latest technology trends and architectural styles and suggest leveraging modern digital technologies to cater to business needs. You will be recognized as an expert in your technology and architecture field and will guide cloud objectives and technologies.
Mandatory requirement: TOGAF certified and must have current Insurance domain experience.
Key Responsibilities :
- Define end to end Enterprise and Cloud based Architecture catering to client needs and requirements
- Analyze complex application landscapes, anticipate potential problems and future trends, and assess potential solutions, impacts, and risks to propose a cloud roadmap, solution architecture, and associated TCO
- Develop and implement cloud architecture solutions based on AWS, Azure, or GCP when assigned to work on delivery projects
- Analyze client requirements and propose overall application modernization, migration, and greenfield implementations
- Experience in implementing and deploying a DevOps-based, end-to-end cloud application would be a plus
- Lead teams in discovery and architecture workshops, and influence client architects and IT personnel
- Guide other architects working with you in the team; adapt communications and approaches to conclude technical scope discussions with various partners, resulting in common agreements
- Deliver an optimized infrastructure services design leveraging public, private, and hybrid cloud architectures and services
- Act as subject matter and implementation expert for the client as related to technical architecture and implementation of the proposed solution using cloud services
- Implement solutions using cloud services, open source, and other 3rd-party technologies
- Create and develop process frameworks, documents, SOPs, cookbooks, and automation scripts/tools for cloud discovery, migrations, application modernization, DevSecOps, and service transitions to cloud
- Experienced in developing Terraform/PowerShell/CloudFormation/ARM scripts and templates
- Lead internal and external initiative to standardize, automate and develop competitive assets and accelerators
Technical Experience :
- Knowledge of creating IaaS and PaaS cloud solutions that meet customer needs for scalability, reliability, and performance
- Experienced in delivering large/medium-scale enterprise solutions using microservices, APIs, containerization, Kubernetes, app modernization, and cloud IaaS & PaaS
- Understanding of WAN, LAN, TCP/IP, VPN, virtual networking, subnets, routing, storage account/management, and Infrastructure as a Service
- Experience in creating system logical views and network diagrams, and the ability to build automated solutions for server builds and resource monitoring
- Knowledge and experience in delivering solutions using middleware integration technologies such as SOA, ESB, API platforms, and API security
- Experience in design, development, and architecture of Java/JEE/.NET-based systems
- Ability to assess multiple types of operating systems (AIX, Linux, Windows, etc.) and databases (Postgres, Sybase, MySQL, MS SQL, NoSQL, DB2, Oracle, Cassandra, and MongoDB) to develop transformation approaches and solutions
Professional Attributes :
- Strategic thinker, with the ability to grasp new technologies, innovate, and develop and nurture new solutions
- Experience in driving consulting workshops and creating workshop content on short deadlines for C-level audiences
- Demonstrate thought leadership, create a strong impact in engagements, manage time well, and demonstrate flexibility by adapting to situations
- An ability to follow processes
- Strong documentation skills
- Good communication skills, both written and verbal, with the client
Certifications
- TOGAF, ITIL, Azure Solution Architect Associate & Expert/AWS Solution Architect Associate & Professional /GCP Professional
JOB DETAILS:
- Job Title: Senior Devops Engineer 2
- Industry: Ride-hailing
- Experience: 5-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.
2. Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
3. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs
4. Candidate must have experience in database migration from scratch
5. Must have a firm hold on the container orchestration tool Kubernetes
6. Must have expertise in configuration management tools like Ansible, Terraform, Chef/Puppet
7. Understanding of programming languages like Go/Python and Java
8. Working experience with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka
9. Working experience on Cloud platform - AWS
10. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Understanding of programming languages like Go/Python and Java
● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
What’s there for you?
Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployment to production
● Self-managed infra supporting
● 100% OSS
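For a flavour of the day-to-day automation this kind of role implies, below is a small Python sketch using the official kubernetes client to list pods that are not in a Running state; it assumes a local kubeconfig and is purely illustrative.

```python
# Illustrative automation sketch: list pods that are not Running, cluster-wide.
# Requires `pip install kubernetes` and a working kubeconfig (assumption).
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    config.load_kube_config()                    # uses the local kubeconfig
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase != "Running":
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```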
JOB DETAILS:
- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.
2. Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant
3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members)
4. Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs
5. Candidate must have first-hand experience in database migration from scratch
6. Must have a firm hold on the container orchestration tool Kubernetes
7. Should have expertise in configuration management tools like Ansible, Terraform, Chef/Puppet
8. Understanding of programming languages like Go/Python and Java
9. Working experience with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka
10. Working experience on Cloud platform - AWS
11. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs
● Codify our infrastructure
● Do what it takes to keep the uptime above 99.99%
● Understand the bigger picture and sail through the ambiguities
● Scale technology considering cost and observability and manage end-to-end processes
● Understand DevOps philosophy and evangelize the principles across the organization
● Strong communication and collaboration skills to break down the silos
Job Requirements:
● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience
● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant
● Must have a firm hold on the container orchestration tool Kubernetes
● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet
● Strong problem-solving skills, and ability to write scripts using any scripting language
● Understanding of programming languages like Go/Python and Java
● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
What’s there for you?
Company’s team handles everything – infra, tooling, and self-manages a bunch of databases, such as
● 150+ microservices with event-driven architecture across different tech stacks (Golang/Java/Node)
● More than 100,000 requests per second on our edge gateways
● ~20,000 events per second on self-managed Kafka
● 100s of TB of data on self-managed databases
● 100s of real-time continuous deployment to production
● Self-managed infra supporting
● 100% OSS
JOB DETAILS:
- Job Title: Senior Devops Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based start-up with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus, Grafana, etc. are an added advantage.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on Cloud platform - AWS
9. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Strong communication and collaboration skills to break down the silos
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understands the container orchestration tool Kubernetes.
- Proficient Knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Add on- Prometheus & Grafana etc.
- Problem-solving attitude, and ability to write scripts using any scripting language.
- Understanding of programming languages like Go/Python and Java.
- Basic understanding of databases and middlewares like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
- Should be able to take ownership of tasks, and must be responsible.
- Good communication skills
Job Description – Senior Software Developer
Key Responsibilities:
- Architect, develop and maintain web applications using the MERN stack.
- Design RESTful APIs (and possibly GraphQL) in Node.js/Express, integrate with front-end in React.
- Build and enhance the front-end UI in React, ensuring performance, responsiveness and maintainability.
- Design MongoDB schemas, indexes and queries for high-traffic/scale scenarios.
- Deploy, operate and optimise cloud infrastructure on AWS: e.g., EC2, Lambda, S3, RDS/DynamoDB, VPCs, IAM, autoscaling.
- Ensure high availability, fault-tolerance, and scalability of services in production.
- Set up and maintain CI/CD pipelines, infrastructure as code, automated testing, monitoring & alerting.
- Troubleshoot and fix performance bottlenecks across the stack (front-end, back-end, database, cloud).
- Collaborate with cross-functional teams (product, design, QA, DevOps) to deliver features end-to-end.
- Mentor junior/mid-level developers, conduct code-reviews, impart best practices.
- Stay up to date with emerging technologies, propose improvements to architecture and processes.
Required Skills & Qualifications:
- 3-5 years of hands-on experience in full-stack web development using the MERN stack (MongoDB, Express.js, React.js, Node.js).
- Strong front-end skills in React: component architecture, hooks, state management, performance optimisation.
- Solid back-end skills in Node.js/Express: API design, middleware, security, robustness.
- Experience with MongoDB (or equivalent NoSQL) including schema design, query optimisation, indexing.
- Proven experience working with AWS cloud services (compute, storage, database, networking, security, monitoring).
- Experience deploying applications at scale: autoscaling, high availability, disaster recovery.
- Familiarity with CI/CD pipelines, infrastructure‐as‐code (CloudFormation, Terraform or similar), containerisation (Docker) is a plus.
- Good understanding of software engineering best practices: code-quality, testing, documentation, version control (Git).
- Excellent communication skills, self-motivation, ability to work remotely and collaborate across time zones.
- Bachelor’s degree in Computer Science or related field (or equivalent experience).
- Strong expertise in Java 8+, Spring Boot, REST APIs.
- Strong front-end experience with Angular 8+, TypeScript, HTML, CSS.
- Experience with SQL/NoSQL databases (MySQL, PostgreSQL, MongoDB, etc.).
- Hands-on with Git, Maven/Gradle, Jenkins, CI/CD.
- Knowledge of cloud platforms (AWS) is an added advantage.
- Experience with Agile/Scrum methodologies.
- Domain Expertise (ADDED): Proven experience working on Auto-Loan Management Systems (LMS), Vehicle Finance, or related banking/NBFC solutions.

Profitable E-comm/NBFC company close to becoming a Unicorn.
Want to build core backend-powered experiences such as Checkout and Credit for the 5th largest E-com portal in the country? Then read on..
About The company
Founded by serial entrepreneurs from IIT Bombay, Snapmint is challenging the way banking is done by building the banking experience from the ground up. Our first product provides purchase financing at 0% interest to the 300 million consumers in India who do not have credit cards, using instant credit scoring and advanced underwriting systems. We look at hundreds of variables, going well beyond traditional credit models. With real-time credit approval and seamless digital loan servicing and repayment technology, we are revolutionizing the way banking is done for today's smartphone-wielding Indian.
Website: https://snapmint.com
LinkedIn: https://www.linkedin.com/company/snapmintfinserv/
Title: Senior Engineering Manager, Backend
Experience: 8-12 Years
Work Location: Gurgaon (Unitech Cyber Park, Sector 39)
Working Arrangement: 5 days (WFO)
Job Overview:
As Engineering Manager, Backend, you will lead a team of backend engineers, driving the development of scalable, reliable, and performant systems. You will work closely with product management, front-end engineers, and other cross-functional teams to deliver high-quality solutions while ensuring alignment with the company's technical and business goals. You will play a key role in coaching and mentoring engineers, promoting best practices, and helping to grow the backend engineering capabilities.
Key Responsibilities:
- Design and build highly scalable, low-latency, fault-tolerant backend services handling high-volume financial transactions.
- Work hands-on with the team on core backend development, architecture, and production issues.
- Lead, mentor, and manage a team of backend engineers, ensuring high-quality delivery and fostering a collaborative work environment.
- Collaborate with product managers, engineers, and other stakeholders to define technical solutions and design scalable backend architectures.
- Own the development and maintenance of backend systems, APIs, and services.
- Drive technical initiatives, including infrastructure improvements, performance optimizations, and platform scalability.
- Guide the team in implementing industry best practices for code quality, security, and performance.
- Participate in code reviews, providing constructive feedback and maintaining high coding standards.
- Promote agile methodologies and ensure the team adheres to sprint timelines and goals.
- Develop and track key performance indicators (KPIs) to measure team productivity and system reliability.
- Foster a culture of continuous learning, experimentation, and improvement within the backend engineering team.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 8+ years of experience in backend development with a proven track record of leading engineering teams.
- Strong experience with a backend language, i.e., Node.js/Golang
- Experience working with databases (SQL, NoSQL), caching systems, and RESTful APIs.
- Familiarity with cloud platforms like AWS, GCP, or Azure and containerization technologies (e.g., Docker, Kubernetes).
- Solid understanding of software development principles, version control, and CI/CD practices.
- Excellent problem-solving skills and the ability to architect complex systems.
- Strong leadership, communication, and interpersonal skills.
- Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
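Much of the agent-orchestration work above reduces to a loop that lets a model choose a tool, executes it, and feeds the result back; the minimal Python sketch below shows that shape with a stubbed call_llm function standing in for a real LLM or framework call (OpenAI, LangChain, etc.). Every name in it is hypothetical.

```python
# Skeleton of a tool-calling agent loop; `call_llm` is a stub standing in for a real
# LLM/framework call, and the tools and routing format are hypothetical.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"                     # placeholder tool

TOOLS = {"get_weather": get_weather}

def call_llm(messages: list) -> dict:
    # Stub: a real implementation would call an LLM API here. For the demo we
    # pretend the model asks for the weather once and then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Bengaluru"}}
    return {"final": "It is sunny in Bengaluru."}

def run_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(5):                            # cap the number of tool hops
        decision = call_llm(messages)
        if "final" in decision:
            return decision["final"]
        tool = TOOLS[decision["tool"]]
        result = tool(**decision["args"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "Stopped: too many tool calls."

if __name__ == "__main__":
    print(run_agent("What's the weather in Bengaluru?"))
```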
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Roles & Responsibilities
- Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
- Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
- Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems (a minimal PySpark sketch follows this list).
- Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
- Data Modeling: Develop and maintain data models with strong governance practices.
- Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
- Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for secure, tamper-resistant data management.
- Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.
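For readers unfamiliar with the Medallion pattern referenced above, the sketch below shows one common Bronze-to-Silver step using PySpark and Delta tables: raw records are kept as ingested, and the Silver layer applies typing, null filtering, and de-duplication. The paths, column names, and cleansing rules are illustrative assumptions only.

```python
# Sketch of a Bronze -> Silver refinement step (Medallion Architecture) using
# PySpark and Delta Lake. Table paths, columns, and rules are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested data, stored as-is in Delta format.
bronze = spark.read.format("delta").load("/lake/bronze/orders")

# Silver: typed, filtered, and de-duplicated records ready for modeling.
silver = (
    bronze.withColumn("order_date", to_date(col("order_ts")))
    .filter(col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")
```

A Gold table would then typically aggregate the Silver layer into business-level views, for example daily order totals per region.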
Educational Qualifications
- Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
- Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.
Skills
- Must-have: Proficiency in Spark, SQL, Python, and other relevant data processing technologies; strong knowledge of Databricks and its components, including Delta Live Tables (DLT) pipeline implementations; and expertise in on-premises-to-cloud Spark code optimization and the Medallion Architecture (a DLT sketch follows).
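Since Delta Live Tables (DLT) is called out as a must-have, here is a hedged sketch of how a DLT pipeline is typically declared in Python on Databricks. The table names, source path, and data-quality expectation are hypothetical, and the code is meant to run inside a configured DLT pipeline (where `spark` is provided by the runtime), not as a standalone script.

```python
# Sketch of a Delta Live Tables (DLT) pipeline definition in Python.
# Runs inside a Databricks DLT pipeline, where `spark` is provided.
# Table names, source path, and the expectation rule are illustrative.
import dlt
from pyspark.sql.functions import col


@dlt.table(comment="Bronze: raw order files ingested as-is (assumed path).")
def orders_bronze():
    return spark.read.format("json").load("/lake/raw/orders")


@dlt.table(comment="Silver: cleansed orders with a basic quality expectation.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_silver():
    return dlt.read("orders_bronze").select(col("order_id"), col("amount"))
```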
Good to Have
- Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).
Soft Skills
- Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.
Certifications
- AWS/GCP/Azure Data Engineer Certification.
Job Title: Technical Lead (Java/Spring Boot/Cloud)
Location: Bangalore
Experience: 8 to 12 Years
Overview
We are seeking a highly accomplished and charismatic Technical Lead to drive the design, development, and delivery of high-volume, scalable, and secure enterprise applications. The ideal candidate will possess deep expertise in the Java ecosystem, particularly with Spring Boot and Microservices Architecture, coupled with significant experience in Cloud Solutions (AWS/Azure) and DevOps practices. This role requires a proven leader capable of setting "big picture" strategy while mentoring a high-performing team.
Key Responsibilities
Architecture Design
- Lead the architecture and design of complex, scalable, and secure cloud-native applications using Java/J2EE and the Spring Boot Framework.
- Design and implement Microservices Architecture and RESTful/SOAP APIs.
- Spearhead Cloud Solution Architecture, including the design and optimization of cloud-based infrastructure deployment with auto-scaling, fault-tolerant, and reliability capabilities (AWS/Azure).
- Guide teams on applying Architecture Concepts, Architectural Styles, and Design Patterns (e.g., UML, Object-Oriented Analysis and Design).
- Serve as solution architect for complex migrations of enterprise applications to the cloud.
- Conduct proofs of concept (PoCs) for new technologies like Blockchain (Hyperledger) for solutions such as Identity Management.
Technical Leadership & Development
- Lead the entire software development process from conception to completion within an Agile/Waterfall and Cleanroom Engineering environment.
- Define and enforce best practices and coding standards for Java development, ensuring code quality, security, and performance optimization.
- Implement and manage CI/CD Pipelines & DevOps Practices to automate software delivery.
- Oversee cloud migration and transformation programs for enterprise applications, focusing on reducing infrastructure costs and improving scalability.
- Troubleshoot and resolve complex technical issues related to the Java/Spring Boot stack, databases (SQL Server, Oracle, MySQL, PostgreSQL, Elasticsearch, Redis), and cloud components.
- Ensure the adoption of Test-Driven Development (TDD), unit testing, and mock-based testing practices.
People & Delivery Management
- Act as a charismatic people leader and transformative force, building and mentoring high-performing teams from the ground up.
- Drive Delivery Management, collaborating with stakeholders to align technical solutions with business objectives and managing large-scale programs from initiation to delivery.
- Utilize Excellent Communication & Presentation Skills to articulate technical strategies to both technical and non-technical stakeholders.
- Champion organizational change, driving adoption of new processes, ways of working, and technology platforms.
Required Technical Skills
- Languages & Frameworks: Java (JDK 1.5+), Spring Core Framework, Spring Batch, JavaServer Pages (JSP), Servlets, Apache Struts, JSON, Hibernate.
- Cloud: Extensive experience with Amazon Web Services (AWS) (Solution Architect certification preferred) and familiarity with Azure.
- DevOps/Containerization: CI/CD Pipelines, Docker.
- Databases: Strong proficiency in MS SQL Server, Oracle, MySQL, PostgreSQL, and NoSQL/caching stores (Elasticsearch, Redis).
Education and Certifications
- Master's or Bachelor's degree in a relevant field.
- Certified Amazon Web Services Solution Architect (or equivalent).
- Experience or certification in leadership is a plus.