ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)-
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest/in transit across all cloud services.
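For context, the IAM least-privilege work above often boils down to tightly scoped policies. A minimal sketch of a read-only, prefix-scoped S3 policy (the bucket and prefix names are illustrative placeholders, not part of any real environment):

```python
import json

# Hypothetical least-privilege policy: read-only access to one S3 prefix,
# with encryption-in-transit enforced via a condition. Bucket/prefix names
# are illustrative only.
def make_readonly_s3_policy(bucket: str, prefix: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadOnlyScopedPrefix",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
                # Deny plaintext access: require TLS on every request
                "Condition": {"Bool": {"aws:SecureTransport": "true"}},
            }
        ],
    }

policy = make_readonly_s3_policy("ml-artifacts", "models")
print(json.dumps(policy, indent=2))
```

The point of the sketch is the shape: one action, one resource ARN, and a condition, rather than `s3:*` on `*`.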
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)-
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation.
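A toy illustration of the misconfiguration-scanning idea (in the spirit of Checkov/tfsec, which parse real HCL and ship hundreds of rules; the parsed-resource dicts here are purely illustrative):

```python
# Toy policy-as-code check: flag S3 bucket resources (already parsed from IaC)
# that allow public ACLs or leave encryption off. Real scanners like Checkov
# or tfsec parse HCL/CloudFormation and apply far richer rule sets.
def scan_buckets(resources: dict) -> list:
    findings = []
    for name, attrs in resources.items():
        if attrs.get("acl") in {"public-read", "public-read-write"}:
            findings.append((name, "public ACL"))
        if not attrs.get("server_side_encryption"):
            findings.append((name, "encryption not enabled"))
    return findings

resources = {
    "logs_bucket": {"acl": "private", "server_side_encryption": True},
    "assets_bucket": {"acl": "public-read", "server_side_encryption": False},
}
print(scan_buckets(resources))
```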
CI/CD Security:
- Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
- Implement secure build, artifact signing, and deployment workflows.
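The secrets-scanning step above can be sketched as a pattern pass over a diff. This is deliberately simplified; production tools (e.g., gitleaks, truffleHog) add many more rules plus entropy analysis:

```python
import re

# Minimal secrets scan of the kind CI pipelines run on incoming diffs.
# Two simplified patterns only; real scanners ship large rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_diff(diff_text: str) -> list:
    hits = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

diff = "+ key = 'AKIAABCDEFGHIJKLMNOP'\n+ print('hello')"
print(scan_diff(diff))  # flags line 1
```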
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
- Apply CIS Benchmarks for Kubernetes and Linux.
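One concrete example of the network-policy hardening named above is a default-deny ingress policy per namespace, shown here as the equivalent Python dict of the YAML manifest `kubectl` would consume (the namespace name is illustrative):

```python
# Default-deny ingress NetworkPolicy, a common first step when hardening
# EKS namespaces: an empty podSelector matches every pod, and listing
# "Ingress" with no ingress rules denies all inbound traffic by default.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "ml-serving"},
    "spec": {
        "podSelector": {},           # empty selector: applies to every pod
        "policyTypes": ["Ingress"],  # no ingress rules listed => deny all
    },
}
print(default_deny["kind"])
```

Allow-rules for specific workloads are then layered on top of this baseline.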
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
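On the Airflow connection-secrets point: Airflow resolves a connection id from an `AIRFLOW_CONN_<ID-in-uppercase>` environment variable before falling back to its metadata database, which keeps credentials out of DAG code. A small sketch of the convention (the connection URI is a placeholder; on MWAA this pattern is typically backed by AWS Secrets Manager):

```python
import os

# Airflow convention: connection id "warehouse_db" is looked up from the
# environment variable AIRFLOW_CONN_WAREHOUSE_DB. The URI below is a
# placeholder, never a real credential.
def conn_env_var(conn_id: str) -> str:
    return "AIRFLOW_CONN_" + conn_id.upper()

os.environ[conn_env_var("warehouse_db")] = (
    "postgresql://svc_user:***@db.internal:5432/analytics"
)
print(conn_env_var("warehouse_db"))
```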
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, audit logging.
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
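A common data-drift score behind the monitoring described above is the Population Stability Index (PSI), which compares a feature's binned distribution between a baseline window and a live window. A self-contained sketch (the 0.2 alert threshold is a convention, not a universal rule):

```python
import math

# Population Stability Index: 0 means identical distributions; values above
# ~0.2 are conventionally treated as significant drift worth alerting on.
def psi(expected: list, actual: list) -> float:
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, 1e-6)  # clamp to avoid log(0)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline_counts = [100, 200, 400, 200, 100]  # training-time histogram
live_counts = [120, 180, 380, 210, 110]      # production histogram
print(round(psi(baseline_counts, live_counts), 4))
```

In practice the score per feature is emitted as a metric to Prometheus/CloudWatch and alerted on in Grafana.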
4. Network & Endpoint Security-
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance-
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, runbooks, and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack, building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django)
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI, Django), with working familiarity across both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
Job Description – Full Stack Developer (React + Node.js)
Experience: 5–8 Years
Location: Pune
Work Mode: WFO
Employment Type: Full-time
About the Role
We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications using React and Node.js.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
- Work with relational databases such as PostgreSQL or MySQL.
- Deploy and manage applications in cloud environments (preferably GCP or AWS).
- Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
- Utilize containerization tools like Docker for efficient development and deployment workflows.
- Integrate third-party services and APIs, including AI APIs and tools.
- Contribute to improving development processes, documentation, and best practices.
Required Skills
- Strong experience with React.js (frontend).
- Solid hands-on experience with Node.js (backend).
- Good understanding of relational databases: PostgreSQL / MySQL.
- Experience working in production environments and debugging live systems.
- Strong understanding of OOP or Functional Programming, and clean coding standards.
- Knowledge of Docker or other containerization tools.
- Experience with cloud platforms (GCP or AWS).
- Excellent written and verbal communication skills.
Good to Have
- Experience with Golang or Elixir.
- Familiarity with Kubernetes, RabbitMQ, Redis, etc.
- Contributions to open-source projects.
- Previous experience working with AI APIs or machine learning tools.
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- 7+ years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience in one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
About the company:
Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates an ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with headquarters in India.
About the Role:
As a Data Engineer, you will contribute to cutting-edge global projects and innovative product initiatives, delivering impactful solutions for our Fortune clients. In this role, you will take ownership of the entire data pipeline and infrastructure development lifecycle—from ideation and design to implementation and ongoing optimization. Your efforts will ensure the delivery of high-performance, scalable, and reliable data solutions. Join us to become a driving force in shaping the future of data infrastructure and innovation, paving the way for transformative advancements in the data ecosystem.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Certifications in a related field will be an added advantage.
Key Competencies:
- Must have experience with SQL, Python and Hadoop
- Good to have experience with Cloud Computing Platforms (AWS, Azure, GCP, etc.), DevOps Practices, Agile Development Methodologies
- Experience with ETL or similar technologies will be an advantage.
- Core Skills: Proficiency in SQL, Python, or Scala for data processing and manipulation
- Data Platforms: Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Tools: Familiarity with tools like Apache Spark, Kafka, and modern data warehouses (e.g., Snowflake, Big Query, Redshift).
- Soft Skills: Strong problem-solving abilities, collaboration, and communication skills to work effectively with technical and non-technical teams.
- Additional: Knowledge of SAP would be an advantage
Key Responsibilities:
- Data Pipeline Development: Build, maintain, and optimize ETL/ELT pipelines for seamless data flow.
- Data Integration: Consolidate data from various sources into unified systems.
- Database Management: Design and optimize scalable data storage solutions.
- Data Quality Assurance: Ensure data accuracy, consistency, and completeness.
- Collaboration: Work with analysts, scientists, and stakeholders to meet data needs.
- Performance Optimization: Enhance pipeline efficiency and database performance.
- Data Security: Implement and maintain robust data security and governance policies
- Innovation: Adopt new tools and design scalable solutions for future growth.
- Monitoring: Continuously monitor and maintain data systems for reliability.
- Data Engineers ensure reliable, high-quality data infrastructure for analytics and decision-making.
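The pipeline responsibilities above reduce to the classic extract/transform/load stages. A minimal, self-contained sketch (the inline sample rows stand in for real S3/warehouse connectors; the dedupe step illustrates idempotent loads):

```python
# Minimal ETL sketch. Real pipelines swap each stage for S3/Spark/warehouse
# connectors, but the stage boundaries and idempotent load are the same idea.
def extract() -> list:
    # Stand-in for reading from a source system; note the duplicate record.
    return [{"id": "1", "amount": "10.5"},
            {"id": "2", "amount": "3.0"},
            {"id": "1", "amount": "10.5"}]

def transform(rows: list) -> list:
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen:
            continue  # drop duplicates so re-runs stay idempotent
        seen.add(r["id"])
        out.append({"id": int(r["id"]), "amount": float(r["amount"])})
    return out

def load(rows: list, warehouse: dict) -> dict:
    for r in rows:
        warehouse[r["id"]] = r  # keyed upsert into the target store
    return warehouse

warehouse = load(transform(extract()), {})
print(sorted(warehouse))
```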
Job brief
An ideal candidate will have 3 to 6 years of experience working in live projects related to data analysis or Business Intelligence.
Responsibilities:
- Work with business groups and technical teams to develop and maintain BI Reports & Dashboards.
- Designing, developing, and maintaining complex BI solutions, such as dashboards, reports, and data visualizations.
- Ability to extract data from various sources such as files, cloud services, and databases, and load it into data warehouses and data lakes.
- Responsible for performance tuning and optimization of BI solutions, ensuring they perform efficiently and respond quickly.
- Provide technical support during weekends, after-hours and holidays when needed.
Skills Areas
- Good knowledge of Data Analysis and Data Visualization and BI performance optimization techniques.
- Experience in developing Reports & Dashboards, relational and multidimensional models, report migration & Upgrade activities.
- Visualization Tools: Experience in one or more following visualization tools (Amazon QuickSight(Preferred), Bold BI, Power BI, Tableau, Qlik)
- Strong knowledge of writing simple and complex SQL queries
- Knowledge of ETL job development using Informatica, DataStage, or other ETL tools
- Knowledge in one or more of the following domains – fraud and risk, retail payments, banking, and financial services.
- Willingness to learn new technologies and implement them on demand as business requirements evolve.
- A self-motivated professional with excellent problem-solving skills.
- Business Communication skills (Written & Verbal)
- Presentation & Documentation skills
- Mentoring and People Management skills
Experience
The ideal candidate will have more than 3 years of relevant experience. A professional degree or post-graduate qualification is desirable. Certifications in the respective technology areas will be an added advantage.
Required Qualifications
- 4+ years of professional software development experience
- 2+ years contributing to service design and architecture
- Strong expertise in modern languages like Golang, Python
- Deep understanding of scalable, cloud-native architectures and microservices
- Production experience with distributed systems and database technologies
- Experience with Docker, software engineering best practices
- Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
- Experience with Golang, AWS, and Kubernetes
- CI/CD pipeline experience with GitHub Actions
- Start-up environment experience
Job Summary: Lead/Senior ML Data Engineer (Cloud-Native, Healthcare AI)
Experience Required: 8+ Years
Work Mode: Remote
We are seeking a highly autonomous and experienced Lead/Senior ML Data Engineer to drive the critical data foundation for our AI analytics and Generative AI platforms. This is a specialized hybrid position, focusing on designing, building, and optimizing scalable data pipelines (ETL/ELT) that transform complex, messy clinical and healthcare data into high-quality, production-ready feature stores for Machine Learning and NLP models.
The successful candidate will own technical work streams end-to-end, ensuring data quality, governance, and low-latency delivery in a cloud-native environment.
Key Responsibilities & Focus Areas:
- ML Data Pipeline Ownership (70-80% Focus): Design and implement high-performance, scalable ETL/ELT pipelines using PySpark and a Lakehouse architecture (such as Databricks) to ingest, clean, and transform large-scale healthcare datasets.
- AI Data Preparation: Specialize in Feature Engineering and data preparation for complex ML workloads, including transforming unstructured clinical data (e.g., medical notes) for Generative AI and NLP model training.
- Cloud Architecture & Orchestration: Deploy, manage, and optimize data workflows using Airflow in a production AWS environment.
- Data Governance & Compliance: Mandatorily implement pipelines with robust data masking, pseudonymization, and security controls to ensure continuous adherence to HIPAA and other relevant health data privacy regulations.
- Technical Leadership: Lead and define technical requirements from ambiguous business problems, acting as a key contributor to the data architecture strategy for the core AI platform.
Non-Negotiable Requirements (The "Must-Haves"):
- 5+ years of progressive experience as a Data Engineer, with a clear focus on ML/AI support.
- Deep expertise in PySpark/Python for distributed data processing.
- Mandatory proficiency with Lakehouse platforms (e.g., Databricks) in an AWS production environment.
- Proven experience handling complex clinical/healthcare data (EHR, Claims), including unstructured text.
- Hands-on experience with HIPAA/GDPR compliance in data pipeline design.
About the Role
We are looking for a passionate GenAI Developer to join our dynamic team at Hardwin Software Solutions. In this role, you will design and develop scalable backend systems, leverage AWS services for data processing, and work on cutting-edge Generative AI solutions. If you enjoy solving complex problems and building impactful applications, we’d love to hear from you.
What You Will Do
- Develop robust and scalable backend services and APIs using Python, integrating with various AWS services.
- Design, implement, and maintain data processing pipelines leveraging AWS (e.g., S3, Lambda).
- Collaborate with cross-functional teams to translate requirements into efficient technical solutions.
- Write clean, maintainable code while following agile engineering practices (CI/CD, version control, release cycles).
- Optimize application performance and scalability by fine-tuning AWS resources and leveraging advanced Python techniques.
- Contribute to the development and integration of Generative AI techniques into business applications.
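The S3/Lambda pipelines mentioned above typically start from an S3-event-triggered handler. A sketch of such a handler (the event shape follows the documented S3 notification format; the bucket name and "processing" are placeholders):

```python
import json
import urllib.parse

# Sketch of an AWS Lambda handler for an S3 "ObjectCreated" notification.
# Real handlers would fetch and process the object; here we only collect
# the object URIs, which is enough to show the event wiring.
def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 notification keys are URL-encoded ('+' for spaces)
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

# Local invocation with a minimal sample event (illustrative names)
sample_event = {"Records": [
    {"s3": {"bucket": {"name": "ingest-bucket"},
            "object": {"key": "docs/report+1.txt"}}}
]}
print(handler(sample_event))
```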
What You Should Have
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 3+ years of professional experience in software development.
- Strong programming skills in Python and good understanding of data structures & algorithms.
- Hands-on experience with AWS services: S3, Lambda, DynamoDB, OpenSearch.
- Experience with Relational Databases, Source Control, and CI/CD pipelines.
- Practical knowledge of Generative AI techniques (mandatory).
- Strong analytical and mathematical problem-solving abilities.
- Excellent communication skills in English.
- Ability to work both independently and collaboratively, with a proactive and self-motivated attitude.
- Strong organizational skills with the ability to prioritize tasks and meet deadlines.
Required Skills & Experience
- Must have 8+ years of relevant experience in Java design and development.
- Extensive experience working on solution design and API design.
- Experience in Java development at an enterprise level (Spring Boot, Java 17+, Spring Security, Microservices, Spring).
- Extensive work experience in monolithic applications using Spring.
- Extensive experience leading API development and integration (REST/JSON).
- Extensive work experience using Apache Camel.
- In-depth technical knowledge of database systems (Oracle, SQL Server).
- Ability to refactor and optimize existing code for performance, readability, and maintainability.
- Experience working with Continuous Delivery/Continuous Integration (CI/CD) pipelines.
- Experience in container platforms (Docker, OpenShift, Kubernetes).
- DevOps knowledge including:
- Configuring continuous integration, deployment, and delivery tools like Jenkins or Codefresh
- Container-based development using Docker, Kubernetes, and OpenShift
- Instrumenting monitoring and logging of applications
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, with recent companies
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide CTC breakup (fixed + variable).
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Required Skills: TypeScript, MVC, cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js
Criteria:
Need candidates from Growing startups or Product based companies only
1. 4–6 years’ experience in backend engineering
2. Minimum 2+ years hands-on experience with:
- TypeScript
- Express.js / Nest.js
3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)
4. Strong understanding of system design & scalable architecture
5. Hands-on experience in:
- Event-driven architecture / Domain-driven design
- MVC / Microservices
6. Strong in automated testing (especially integration tests)
7. Experience with CI/CD pipelines (GitHub Actions or similar)
8. Experience managing production systems
9. Solid understanding of performance, reliability, observability
10. Cloud experience (AWS preferred; GCP/Azure acceptable)
11. Strong coding standards — Clean Code, code reviews, refactoring
Description
About the opportunity
We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.
As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.
What you will be doing
- Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
- Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
- Design scalable platforms that empower our product and marketing teams to rapidly experiment.
- Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
- Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
- Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
- Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.
The role could be ideal for you if you
- 4–6 years of backend engineering experience, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
- Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
- Experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality and hence, writing automated tests is crucial to the role.
- Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
- Experience in observability techniques like code instrumentation for metrics, tracing and logging.
- Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
- Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
- Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
- Can take ownership of goals and deliver them with high accountability.
Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer on our team is an expert in at least one technology but often ventures into adjacent technologies like React.js, Flutter, data platforms, AWS, and Kubernetes. If this does not excite you, you will not like working at company. Bonus: experience in adjacent technologies such as AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, or Infrastructure as Code (Terraform, Pulumi, etc.).
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, and Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID, schema evolution, time travel).
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, EventHub).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions).
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
Requirements
- 6–12 years of backend development experience.
- Strong expertise in Java 11+, Spring Boot, REST APIs, AWS.
- Solid experience with distributed, high-volume systems.
- Strong knowledge of RDBMS (e.g., MySQL, Oracle) and NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra).
- Hands-on with CI/CD (Jenkins) and caching technologies (Redis or similar).
- Strong debugging and system troubleshooting skills.
- Experience with payment systems is a must.
Job Summary
We are seeking a Senior Software Engineer with strong expertise in .NET Core, C#, and microservices architecture to design and develop secure, high-performance applications. The ideal candidate will have hands-on experience with AWS cloud services, SQL (queries, stored procedures, performance tuning), and a strong focus on non-functional requirements (NFRs) such as scalability, security, and reliability. Exposure to Angular or React for front-end development is a plus.
Key Responsibilities
Application Development:
Design, develop, and maintain enterprise-grade applications using .NET Core, C#, and microservices architecture.
Cloud Integration:
Deploy and manage applications on AWS, leveraging services like EC2, S3, Lambda, and API Gateway.
SQL Development:
Write and optimize SQL queries, stored procedures, and functions for high-performance data access.
Caching & Performance:
Implement caching strategies (e.g., Redis) and optimize application performance for low latency and high throughput.
NFR & Security Compliance:
Ensure applications meet performance, scalability, and security requirements.
Implement secure coding practices and authentication/authorization mechanisms.
DevOps & CI/CD:
Integrate applications into CI/CD pipelines and automate deployments using AWS tools or Jenkins.
Collaboration:
Work closely with architects, QA, and product teams to deliver high-quality solutions.
Front-End (Good to Have):
Exposure to Angular or React for building responsive user interfaces.
Required Skills & Qualifications
Experience: 6+ years in software development with strong .NET expertise.
Proficiency in C#, .NET Core, and microservices architecture.
Hands-on experience with AWS cloud services (mandatory).
Strong skills in SQL queries, stored procedures, and performance tuning.
Experience with caching technologies (Redis or similar).
Understanding of NFRs (performance, security, scalability) and ability to implement them.
Familiarity with CI/CD pipelines and DevOps practices.
Excellent problem-solving and communication skills.
Preferred Qualifications
Experience with Angular or React for front-end development.
Knowledge of containerization (Docker, Kubernetes).
AWS certification is a plus.
Seeking an experienced AWS Migration Engineer with 7+ years of hands-on experience to lead cloud migration projects, assess legacy systems, and ensure seamless transitions to AWS infrastructure. The role focuses on strategy, execution, optimization, and minimizing downtime during migrations.
Key Responsibilities:
- Conduct assessments of on-premises and legacy systems for AWS migration feasibility.
- Design and execute migration strategies using AWS Migration Hub, DMS, and Server Migration Service.
- Plan and implement lift-and-shift, re-platforming, and refactoring approaches.
- Optimize workloads post-migration for cost, performance, and security.
- Collaborate with stakeholders to define migration roadmaps and timelines.
- Perform data migration, application re-architecture, and hybrid cloud setups.
- Monitor migration progress, troubleshoot issues, and ensure business continuity.
- Document processes and provide post-migration support and training.
- Manage and troubleshoot Kubernetes/EKS networking components including VPC CNI, Service Mesh, Ingress controllers, and Network Policies.
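Triage into lift-and-shift, re-platforming, or refactoring can be sketched as a simple classifier over the server inventory; the attributes and rules below are illustrative, not an AWS-defined rule set:

```python
def migration_approach(server: dict) -> str:
    """Map one inventory record to a migration approach (toy rules)."""
    if server.get("end_of_life"):
        return "refactor"        # legacy stack: rebuild cloud-native
    if server.get("managed_db_candidate"):
        return "re-platform"     # e.g. self-hosted MySQL -> RDS
    return "lift-and-shift"      # rehost as-is (e.g. via AWS MGN)

# Hypothetical assessment output for three servers.
inventory = [
    {"name": "web-01"},
    {"name": "db-01", "managed_db_candidate": True},
    {"name": "erp-01", "end_of_life": True},
]
plan = {s["name"]: migration_approach(s) for s in inventory}
```

A real assessment would feed in discovery data (AWS Migration Hub, dependency maps) rather than hand-written flags.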
Required Qualifications:
- 7+ years of IT experience, with a minimum of 4 years focused on AWS migrations.
- AWS Certified Solutions Architect or Migration Specialty certification preferred.
- Expertise in AWS services: EC2, S3, RDS, VPC, Direct Connect, DMS, SMS.
- Strong knowledge of cloud migration tools and frameworks (AWS MGN, Snowball).
- Experience with infrastructure as code (CloudFormation, Terraform).
- Proficiency in scripting (Python, PowerShell) and automation.
- Familiarity with security best practices (IAM, encryption, compliance).
- Hands-on experience with Kubernetes/EKS networking components and best practices.
Preferred Skills:
- Experience with hybrid/multi-cloud environments.
- Knowledge of DevOps tools (Jenkins, GitLab CI/CD).
- Excellent problem-solving and communication skills.
JOB TITLE: Senior Full Stack Developer (SDE-3)
LOCATION: Remote/Hybrid.
A LITTLE BIT ABOUT THE ROLE:
As a Full Stack Developer, you will be responsible for developing digital systems that deliver optimal end-to-end solutions to our business needs. The work will cover all aspects of software delivery, including working with staff, vendors, and outsourced contributors to build, release and maintain the product.
Fountane operates a scrum-based Agile delivery cycle, and you will be working within this. You will work with product owners, user experience, test, infrastructure, and operations professionals to build the most effective solutions.
WHAT YOU WILL BE DOING:
- Full-stack development on a multinational team on various products across different technologies and industries.
- Optimize the development process and identify continuing improvements.
- Monitor technology landscape, assess and introduce new technology. Own and communicate development processes and standards.
- The job title does not define or limit your duties, and you may be required to carry out other work within your abilities from time to time at our request. We reserve the right to introduce changes in line with technological developments which may impact your job duties or methods of working.
WHAT YOU WILL NEED TO BE GREAT IN THIS ROLE:
- 3+ years of full-stack development experience, combining back-end and front-end work building fast, reliable web and/or mobile applications.
- Experience with web frameworks (e.g., React, Angular, or Vue) and/or mobile development (e.g., React Native, NativeScript)
- Proficient in at least one JavaScript framework or runtime such as React, Node.js, Angular (2.x+), or jQuery.
- Ability to optimize product development by leveraging software development processes.
- Bachelor's degree or equivalent work experience (minimum six years). With an Associate’s degree, a minimum of 4 years of work experience is required.
- Fountane's current technology stack driving our digital products includes React.js, Node.js, React Native, Angular, Firebase, Bootstrap, MongoDB, Express, Hasura, GraphQL, Amazon Web Services (AWS), and Google Cloud Platform.
SOFT SKILLS:
- Collaboration - Ability to work in teams across the world
- Adaptability - situations are unexpected, and you need to be quick to adapt
- Open-mindedness - Expect to see things outside the ordinary
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially.
Qualifications - No bachelor's degree required. Good communication skills are a must!
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 80 strong from around the world that is radically open-minded and believes in excellence and respecting one another.
Strong DevSecOps / Cloud Security profile
Mandatory (Experience 1) – Must have 8+ years total experience in DevSecOps / Cloud Security / Platform Security roles securing AWS workloads and CI/CD systems.
Mandatory (Experience 2) – Must have strong hands-on experience securing AWS services, including (but not limited to) KMS, WAF, Shield, CloudTrail, AWS Config, Security Hub, Inspector, Macie, and IAM governance
Mandatory (Experience 3) – Must have hands-on expertise in Identity & Access Security including RBAC, IRSA, PSP/PSS, SCPs and IAM least-privilege enforcement
Mandatory (Experience 4) – Must have hands-on experience with security automation using Terraform and Ansible for configuration hardening and compliance
Mandatory (Experience 5) – Must have strong container & Kubernetes security experience including Docker image scanning, EKS runtime controls, network policies, and registry security
Mandatory (Experience 6) – Must have strong CI/CD pipeline security expertise including SAST, DAST, SCA, Jenkins Security, artifact integrity, secrets protection, and automated remediation
Mandatory (Experience 7) – Must have experience securing data & ML platforms including databases, data centers/on-prem environments, MWAA/Airflow, and sensitive ETL/ML workflows
Mandatory (Company) - Product-company background preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Core Responsibilities:
- The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
- Model Development: Algorithms and architectures range from traditional statistical methods to deep learning, along with employing LLMs in modern frameworks.
- Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
- Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
- System Integration: Integrate models into existing systems and workflows.
- Model Deployment: Deploy models to production environments and monitor performance.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
- Continuous Improvement: Identify areas for improvement in model performance and systems.
Skills:
- Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
- Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting. Other tech touch points are ScyllaDB (similar to BigTable), OpenSearch, and the Neo4j graph database.
- Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
- Knowledge of model monitoring and performance evaluation.
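Data preparation steps like those above often reduce to small, testable transforms; a minimal sketch of z-score feature scaling (a typical step before model training), using only the standard library:

```python
import statistics

def standardize(values: list[float]) -> list[float]:
    """Z-score scaling: subtract the mean, divide by the (population)
    standard deviation, so the feature has mean 0 and unit variance."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    if std == 0:
        # Constant feature: nothing to scale, return zeros.
        return [0.0 for _ in values]
    return [(v - mean) / std for v in values]

scaled = standardize([2.0, 4.0, 6.0])
```

In practice this would be one transform in a pipeline (e.g. a SageMaker processing step) fitted on training data only.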
Required experience:
- Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
- AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
- AWS data: Redshift, Glue
- Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Skills: AWS, AWS Cloud, Amazon Redshift, EKS
Must-Haves
Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
Notice period - 0 to 15 days only
Hybrid work mode - 3 days in office, 2 days at home
Designation: Software Development Team Lead (Full-Stack)
Location: Bangalore/Bhopal
Package: 8 LPA to 15 LPA
Job Type: Full Time
Experience: 6 to 10+ years
Job Title: Software Development Team Lead (Full-Stack)
We are seeking an experienced Software Development Team Lead with strong capabilities in handling frontend, backend, and full-stack development teams across multiple technologies. The ideal candidate will have hands-on experience in Python, Next.js, and other modern tech stacks, along with the ability to guide a diverse development team, ensure high-quality delivery, and drive end-to-end project execution.
Key Responsibilities
Team Leadership & Multi-Tech Management
- Lead and manage a team of developers working across frontend, backend, and full-stack technologies.
- Provide technical direction, conduct code reviews, and mentor team members.
- Allocate tasks based on strengths (UI, backend, APIs, DevOps, etc.) and ensure balanced workload.
- Foster a collaborative, innovative, and high-performance engineering culture.
Full-Stack Technical Contribution
- Work hands-on with Python (backend) and Next.js/React (frontend).
- Guide teams on best practices across UI development, API design, database architecture, and deployment.
- Ensure scalable, secure, and maintainable code across all layers of the product.
- Troubleshoot complex issues across frontend, backend, microservices, and integrations.
Project Execution & Delivery
- Manage end-to-end project lifecycle—from planning to development, testing, and deployment.
- Coordinate with Product, QA, UX/UI, DevOps, and Management teams.
- Drive sprint planning, task estimation, and timeline adherence.
- Improve delivery speed and quality through automation, CI/CD, and structured workflows.
Cross-Functional Collaboration
- Translate business requirements into technical specifications.
- Communicate project updates, challenges, and solutions to stakeholders.
- Collaborate with designers, product managers, and other engineering units.
Process Improvement
- Define and implement coding standards, architecture guidelines, and development processes.
- Introduce new technologies and best practices for continuous improvement.
- Promote efficient workflows, documentation, and team knowledge-sharing.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 6–10+ years of strong software development experience.
- Proven experience leading full-stack development teams.
- Hands-on expertise in:
- Backend: Python (Django, Flask, FastAPI), API development
- Frontend: Next.js, React, JavaScript/TypeScript
- Databases: SQL/NoSQL
- Ability to manage teams working on multiple technologies (frontend, backend, APIs, DevOps).
- Experience with cloud services (AWS/Azure/GCP).
- Strong knowledge of CI/CD, Git workflows, containers (Docker), and microservices.
- Excellent communication, leadership, and problem-solving skills.
Preferred Qualifications
- Exposure to other technologies/frameworks (Node.js, Angular, Java, .NET, PHP, etc.)
- Experience managing cross-functional engineering teams (QA, DevOps, UI/UX).
- Understanding of scalable architectures, system design, and performance optimization.
Requirements:
- Strong experience in Java and J2EE technologies in a cloud-based environment.
- Expert knowledge of JPA, Hibernate, JDBC, SQL, Spring, JUnit, JSON, and REST/JSON web services.
- Strong knowledge in Java Design Patterns.
- Development and implementation of features in any Cloud platform products and technologies.
- Experience developing applications with Agile team methodologies preferred.
- Strong Object-Oriented design skills and understanding of MVC.
- Sufficient experience with Git to organize a large software project with multiple developers to include branching, tagging and merging.
Desired Skills:
- Experience in Azure cloud (PaaS) with Java is a plus.
- Strong business application design skills.
- Excellent communications and interpersonal skills.
- Strong debugging skills.
- Highly proficient in standard Java development tools (Eclipse, Maven, etc.)
- A strong interest in building security into applications from the initial design.
- Experience at creating technical project Documentation and task time estimates.
- In-depth knowledge of at least one high-level programming language
- Understanding of core AWS services, their uses, and basic AWS architecture best practices
- Proficiency in developing, deploying, and debugging cloud-based applications using AWS
- Ability to use the AWS service APIs, AWS CLI, and SDKs to write applications
- Ability to identify key features of AWS services
- Understanding of the AWS shared responsibility model
- Understanding of application lifecycle management
- Ability to use a CI/CD pipeline to deploy applications on AWS
- Ability to use or interact with AWS services
- Ability to apply a basic understanding of cloud-native applications to write code
- Ability to write code using AWS security best practices (e.g., not using secret and access keys in the code, instead using IAM roles)
- Ability to author, maintain, and debug code modules on AWS
- Proficiency writing code for serverless applications
- Understanding of the use of containers in the development process
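On the point about not embedding secret and access keys in code: a minimal sketch of the kind of check a secrets scanner performs, flagging strings shaped like AWS access key IDs (the `AKIA`/`ASIA` prefix plus 16 uppercase alphanumerics is the commonly published format; both sample snippets below are hypothetical):

```python
import re

# AWS access key IDs have a recognizable shape; flagging them in source
# is a minimal form of secrets scanning. IAM roles make them unnecessary.
ACCESS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def find_hardcoded_keys(source: str) -> list[str]:
    """Return every string in `source` that looks like an AWS access key ID."""
    return [m.group(0) for m in ACCESS_KEY_RE.finditer(source)]

bad = 'client = boto3.client("s3", aws_access_key_id="AKIAIOSFODNN7EXAMPLE")'
good = 'client = boto3.client("s3")  # credentials come from the IAM role'

flagged = find_hardcoded_keys(bad)
clean = find_hardcoded_keys(good)
```

The second snippet is the recommended shape: with no explicit keys, the SDK resolves credentials from the environment or the attached IAM role.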
Company Overview:
Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.
As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.
Position Overview:
As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential for solving complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.
Work Location: Pune
Job Type: Hybrid
Key Responsibilities:
- Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
- Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
- Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
- Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
- Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
- Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
- Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
- Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
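The streaming aggregation at the heart of such architectures can be sketched independently of the transport (Kafka/Kinesis) and the stream processor; a minimal tumbling-window count in pure Python, with illustrative event shapes and window size:

```python
from collections import defaultdict

def windowed_counts(events, window_seconds=60.0):
    """Tumbling-window counts over a stream of (timestamp, key) events.
    Assumes events arrive in timestamp order; yields (window_start, counts)
    each time the stream crosses into a new window."""
    current_window = None
    counts = defaultdict(int)
    for ts, key in events:
        window = ts - (ts % window_seconds)   # align to window start
        if current_window is not None and window != current_window:
            yield current_window, dict(counts)
            counts = defaultdict(int)
        current_window = window
        counts[key] += 1
    if current_window is not None:            # flush the last open window
        yield current_window, dict(counts)

# Hypothetical per-device events from a monitoring feed.
stream = [(0.0, "sw-1"), (10.0, "sw-2"), (59.0, "sw-1"), (61.0, "sw-2")]
result = list(windowed_counts(stream))
```

Real pipelines add out-of-order handling (watermarks) and persistent state, but the windowing logic is the same.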
Qualifications:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
- Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
- Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
- Practical knowledge of OO design patterns and frameworks like Spring and Hibernate.
- Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
- Solid experience designing and delivering features with high quality on aggressive schedules.
- Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
- Familiarity with performance optimization techniques and principles for backend systems.
- Excellent problem-solving and critical-thinking abilities.
- Outstanding communication and collaboration skills.
Why Join Us:
- Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Professional growth and development opportunities.
- Chance to work on cutting-edge technology and products that make a real impact.
If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- We require dedicated availability, so please do not apply if you are currently engaged in other projects.
Job Title: Frappe Engineer
Location: Ahmedabad, Gujarat
About Us
We specialize in providing scalable, high-performance IT and software development teams through our Offshore Development Centre (ODC) model. As a part of the DevX Group, we enable global companies to establish dedicated, culturally aligned engineering teams in India, combining world-class talent, infrastructure, and operational excellence. Our expertise spans AI-led product engineering, data platforms, cloud solutions, and digital transformation initiatives for mid-market and enterprise clients worldwide.
Position Overview
We are looking for an experienced Frappe Engineer to lead the implementation, customization, and integration of ERPNext solutions. The ideal candidate should have strong expertise in ERP module customization, business process automation, and API integrations while ensuring system scalability and performance.
Key Responsibilities
ERPNext Implementation & Customization
- Design, configure, and deploy ERPNext solutions based on business requirements.
- Customize and develop new modules, workflows, and reports within ERPNext.
- Optimize system performance and ensure data integrity.
Integration & Development
- Develop custom scripts using Python and Frappe Framework.
- Integrate ERPNext with third-party applications via REST API.
- Automate workflows, notifications, and business processes.
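The third-party integration work above typically goes through ERPNext's REST API, which exposes documents at `/api/resource/<DocType>` with token-based authentication. A standard-library sketch of building such a request (the base URL and credentials below are placeholders) could look like:

```python
from urllib.request import Request

def erpnext_get(base_url: str, doctype: str, api_key: str, api_secret: str) -> Request:
    """Build an authenticated GET request for ERPNext's REST resource endpoint.

    Uses the token-based auth scheme documented for the Frappe/ERPNext REST API.
    """
    return Request(
        f"{base_url}/api/resource/{doctype}",
        headers={"Authorization": f"token {api_key}:{api_secret}"},
    )
```

The prepared request can then be sent with `urllib.request.urlopen` (or the same headers reused with a higher-level HTTP client) against a reachable ERPNext instance.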
Technical Support & Maintenance
- Troubleshoot ERP-related issues and provide ongoing support.
- Upgrade and maintain ERPNext versions with minimal downtime.
- Ensure security, scalability, and compliance in ERP implementations.
Collaboration & Documentation
- Work closely with stakeholders to understand business needs and translate them into ERP solutions.
- Document ERP configurations, custom scripts, and best practices.
Qualifications & Skills
- 3+ years of experience working with ERPNext and the Frappe Framework.
- Strong proficiency in Python, JavaScript, and SQL.
- Hands-on experience with ERPNext customization, report development, and API integrations.
- Knowledge of Linux, Docker, and cloud platforms (AWS, GCP, or Azure) is a plus.
- Experience in business process automation and workflow optimization.
- Familiarity with version control (Git) and Agile development methodologies.
Benefits:
- Competitive salary.
- Opportunity to lead a transformative ERP project for a mid-market client.
- Professional development opportunities.
- Fun and inclusive company culture.
- Five-day workweek.
Role Overview
As a Senior SQL Developer, you’ll be responsible for data extracts, updating, and maintaining reports as requested by stakeholders. You’ll work closely with finance operations and developers to ensure data requests are appropriately managed.
Key Responsibilities
- Design, develop, and optimize complex SQL queries, stored procedures, functions, and tasks across multiple databases/schemas.
- Transform cost-intensive models from full-refreshes to incremental loads based on upstream data.
- Help design proactive monitoring of data to catch data issues/data delays.
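The full-refresh-to-incremental conversion mentioned above boils down to loading only rows changed since a stored watermark. In pure Python the core idea (column names are illustrative) is:

```python
def incremental_rows(source_rows: list[dict], last_watermark):
    """Select rows updated since the previous load and advance the watermark."""
    fresh = [r for r in source_rows if r["updated_at"] > last_watermark]
    # If nothing changed, the watermark stays where it was
    new_watermark = max((r["updated_at"] for r in fresh), default=last_watermark)
    return fresh, new_watermark
```

In dbt, this same filter is what an incremental materialization expresses in SQL behind the `is_incremental()` macro.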
Qualifications
- 5+ years of experience as a SQL developer, preferably in a B2C or tech environment.
- Ability to translate requirements into datasets.
- Understanding of dbt framework for transformations.
- Basic usage of Git: branching and PR generation.
- Detail-oriented with strong organizational and time management skills.
- Ability to work cross-functionally and manage multiple projects simultaneously.
Bonus Points
- Experience with Snowflake and AWS data technologies.
- Experience with Python and containers (Docker).
Job Description
Experience: 4 – 8 Years
Location: Hyderabad
Employment Type: Full-time
Key Responsibilities
- Design, develop, and implement backend services using Java (latest version), Spring Boot, and Microservices architecture.
- Participate in the end-to-end development lifecycle, from requirement analysis to deployment and support.
- Collaborate with cross-functional teams (UI/UX, DevOps, Product) to deliver high-quality, scalable software solutions.
- Integrate APIs and manage data flow between services and front-end systems.
- Work on cloud-based deployment using AWS or GCP environments.
- Ensure performance, security, and scalability of services in production.
- Contribute to technical documentation, code reviews, and best practice implementations.
Required Skills
- Strong hands-on experience with Core Java (latest versions), Spring Boot, and Microservices.
- Solid understanding of RESTful APIs, JSON, and distributed systems.
- Basic knowledge of Kubernetes (K8s) for containerization and orchestration.
- Working experience or strong conceptual understanding of cloud platforms (AWS / GCP).
- Exposure to CI/CD pipelines, version control (Git), and deployment automation.
- Familiarity with security best practices, logging, and monitoring tools.
Preferred Skills
- Experience with end-to-end deployment on AWS or GCP.
- Familiarity with payment gateway integrations or fintech applications.
- Understanding of DevOps concepts and infrastructure-as-code tools (Added advantage).
Inflection.io is a venture-backed B2B marketing automation company, enabling SaaS companies to communicate with their customers and prospects from one platform. We're used by leading SaaS companies like Sauce Labs, Sigma Computing, BILL, Mural, and Elastic, many of which pay more than $100K/yr (over 1 crore rupees).
And… it’s working! We have world-class stats: our largest deal is over 3 crore, we have a 5-star rating on G2 and over 100% NRR, and we constantly break sales and customer records. We’ve raised $14M in total since 2021, with $7.6M of fresh funding in 2024, giving us many years of runway.
However, we’re still in startup mode with approximately 30 employees, and we are looking for our next SDE3 to help propel Inflection forward. Do you want to join a fast-growing startup that is aiming to build a very large company?
Key Responsibilities:
- Lead the design, development, and deployment of complex software systems and applications.
- Collaborate with engineers and product managers to define and implement innovative solutions.
- Provide technical leadership and mentorship to junior engineers, promoting best practices and fostering a culture of continuous improvement.
- Write clean, maintainable and efficient code, ensuring high performance and scalability of the software.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to coding standards.
- Troubleshoot and resolve complex technical issues, optimizing system performance and reliability.
- Stay updated with the latest industry trends and technologies, evaluating their potential for adoption in our projects.
- Participate in the full software development lifecycle, from requirements gathering to deployment and monitoring.
Qualifications:
- 5+ years of professional software development experience, with a strong focus on backend development.
- Proficiency in one or more programming languages such as Java, Python, Golang, or C#.
- Strong understanding of database systems, both relational (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra).
- Hands-on experience with message brokers such as Kafka, RabbitMQ, or Amazon SQS.
- Experience with cloud platforms (AWS or Azure or Google Cloud) and containerization technologies (Docker, Kubernetes).
- Proven track record of designing and implementing scalable, high-performance systems.
- Excellent problem-solving skills and the ability to think critically and creatively.
- Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- The virtual interview requires video to be on; are you okay with that?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
Ideal Candidate
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
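To make the reconciliation responsibility above concrete, a simplified matcher between an internal ledger and gateway settlement records (field names here are hypothetical, not from any particular gateway) could look like:

```python
def reconcile(ledger: list[dict], gateway: list[dict]):
    """Match ledger entries to gateway settlements by transaction id, flagging
    amount mismatches, entries missing at the gateway, and unexpected settlements."""
    by_id = {t["txn_id"]: t for t in gateway}
    matched, mismatched, missing = [], [], []
    for entry in ledger:
        settlement = by_id.pop(entry["txn_id"], None)
        if settlement is None:
            missing.append(entry)
        elif settlement["amount"] != entry["amount"]:
            mismatched.append((entry, settlement))
        else:
            matched.append(entry)
    unexpected = list(by_id.values())  # settled at the gateway, absent from our books
    return matched, mismatched, missing, unexpected
```

Production reconciliation adds currency handling, date windows, and partial captures/refunds, but the match-and-classify structure is the same.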
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Title: Senior DevOps Engineer (Cybersecurity & VAPT)
Location: Vashi (On-site)
Shift: 10:00 AM – 7:00 PM
Experience: 5+ years
Job Summary
Hiring a Senior DevOps Engineer with strong cloud, CI/CD, and automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
Manage deployments on AWS/Azure
Maintain Linux servers & cloud environments
Ensure uptime, performance, and scalability
CI/CD & Automation
Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
Automate tasks using Bash/Python
Implement IaC (Terraform/CloudFormation)
Containerization
Build and run Docker containers
Work with basic Kubernetes concepts
Cybersecurity & VAPT
Perform Vulnerability Assessment & Penetration Testing
Identify, track, and mitigate security vulnerabilities
Implement hardening and support DevSecOps practices
Assist with firewall/security policy management
Monitoring & Troubleshooting
Use ELK, Prometheus, Grafana, CloudWatch
Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
Work with Dev, QA, and Security for secure releases
Maintain documentation and best practices
Required Skills
AWS/Azure, Linux, Docker
CI/CD tools: Jenkins, GitHub Actions, GitLab
Terraform / IaC
VAPT experience + understanding of OWASP, cloud security
Bash/Python scripting
Monitoring tools (ELK, Prometheus, Grafana)
Strong troubleshooting & communication
Key Responsibilities:
- Application Development: Design and implement both client-side and server-side architecture using JavaScript frameworks and back-end technologies like Golang.
- Database Management: Develop and maintain relational and non-relational databases (MySQL, PostgreSQL, MongoDB) and optimize database queries and schema design.
- API Development: Build and maintain RESTful APIs and/or GraphQL services to integrate with front-end applications and third-party services.
- Code Quality & Performance: Write clean, maintainable code and implement best practices for scalability, performance, and security.
- Testing & Debugging: Perform testing and debugging to ensure the stability and reliability of applications across different environments and devices.
- Collaboration: Work closely with product managers, designers, and DevOps engineers to deliver features aligned with business goals.
- Documentation: Create and maintain documentation for code, systems, and application architecture to ensure knowledge transfer and team alignment.
Requirements:
- Experience: 1+ years in backend development in a microservices ecosystem, with proven experience in front-end and back-end frameworks.
- 1+ years of experience with Golang is mandatory.
- Problem-Solving & DSA: Strong analytical skills and attention to detail.
- Front-End Skills: Proficiency in JavaScript and modern front-end frameworks (React, Angular, Vue.js) and familiarity with HTML/CSS.
- Back-End Skills: Experience with server-side languages and frameworks like Node.js, Express, Python or GoLang.
- Database Knowledge: Strong knowledge of relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB).
- API Development: Hands-on experience with RESTful API design and integration, with a plus for GraphQL.
- DevOps Understanding: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a bonus.
- Soft Skills: Excellent problem-solving skills, teamwork, and strong communication abilities.
Nice-to-Have:
- UI/UX Sensibility: Understanding of responsive design and user experience principles.
- CI/CD Knowledge: Familiarity with CI/CD tools and workflows (Jenkins, GitLab CI).
- Security Awareness: Basic understanding of web security standards and best practices.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of Typescript, JavaScript, HTML, CSS, JSON and REST based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experienced in JS-based build/package tools like Grunt, Gulp, Bower, Webpack.
3-5 years of experience as a full stack developer, with essential requirements on the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.
Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.
Ability to manage a hosting environment and scale applications to handle load changes; knowledge of accessibility and security compliance.
Testing of API endpoints.
Ability to code and create functional web applications and optimise them to improve response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.
Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology. Must be able to write technical documents, coordinate with test teams. Proficiency using Git version control.
About the role:
We are looking for a Senior Site Reliability Engineer who understands the nuances of production systems. If you care about building and running reliable software systems in production, you'll like working at One2N.
You will primarily work with our startups and mid-size clients. We work on One-to-N kinds of problems (hence the name One2N): those where the proof of concept is done and the work revolves around scalability, maintainability, and reliability. In this role, you will be responsible for architecting and optimizing our observability and infrastructure to provide actionable insights into performance and reliability.
Responsibilities:
- Conceptualise, think, and build platform engineering solutions with a self-serve model to enable product engineering teams.
- Provide technical guidance and mentorship to junior engineers.
- Participate in code reviews and contribute to best practices for development and operations.
- Design and implement comprehensive monitoring, logging, and alerting solutions to collect, analyze, and visualize data (metrics, logs, traces) from diverse sources.
- Develop custom monitoring metrics, dashboards, and reports to track key performance indicators (KPIs), detect anomalies, and troubleshoot issues proactively.
- Improve Developer Experience (DX) to help engineers improve their productivity.
- Design and implement CI/CD solutions to optimize velocity and shorten the delivery time.
- Help SRE teams set up on-call rosters and coach them for effective on-call management.
- Automate repetitive manual tasks across CI/CD pipelines and operations, applying infrastructure as code (IaC) practices.
- Stay up-to-date with emerging technologies and industry trends in cloud-native, observability, and platform engineering space.
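The anomaly-detection responsibility above often starts from something as simple as a z-score check on a metric's recent history before graduating to what Datadog or Prometheus alerting rules provide out of the box. A standard-library sketch (the threshold of 3 standard deviations is a common but arbitrary default):

```python
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag a metric sample whose z-score against recent history exceeds the threshold."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # A perfectly flat history: any deviation at all is anomalous
        return latest != mean
    return abs(latest - mean) / stdev > threshold
```

Real monitoring pipelines would add seasonality handling and rolling windows, but this captures the "detect anomalies proactively" idea in a testable form.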
Requirements:
- 6-9 years of professional experience in DevOps practices or software engineering roles, with a focus on Kubernetes on an AWS platform.
- Expertise in observability and telemetry tools and practices, including hands-on experience with tools such as Datadog, Honeycomb, ELK, Grafana, and Prometheus.
- Working knowledge of programming using Golang, Python, Java, or equivalent.
- Skilled in diagnosing and resolving Linux operating system issues.
- Strong proficiency in scripting and automation to build monitoring and analytics solutions.
- Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies.
- Experience with infrastructure as code (IaC) tools such as Terraform, Pulumi.
- Excellent analytical and problem-solving skills, keen attention to detail, and a passion for continuous improvement.
- Strong written, communication, and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using Typescript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django).
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
Tech Stack
Frontend: Typescript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
Job Title : Full Stack Engineer (Python + React.js/Next.js)
Experience : 1 to 6+ Years
Location : Bengaluru (Indiranagar)
Employment : Full-Time
Working Days : 5 Days WFO
Notice Period : Immediate to 30 Days
Role Overview :
We are seeking Full Stack Engineers to build scalable, high-performance fintech products.
You will work on both frontend (Typescript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.
Main Tech Stack :
Frontend : Typescript, Next.js, React
Backend : Python, FastAPI, Django
Database : PostgreSQL, MongoDB, Redis
Cloud : AWS/GCP, Docker, Kubernetes
Tools : Git, GitHub, CI/CD, Elasticsearch
Key Responsibilities :
- Develop full-stack applications with clean, scalable code.
- Build fast, responsive UIs using Typescript, Next.js, React.
- Develop backend APIs using Python, FastAPI, Django.
- Collaborate with product/design to implement solutions.
- Own development lifecycle: design → build → deploy → monitor.
- Ensure performance, reliability, and security.
Requirements :
Must-Have :
- 1–6+ years of full-stack experience.
- Product-based company background.
- Strong DSA + problem-solving skills.
- Proficiency in either frontend or backend with familiarity in both.
- Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
- Strong communication & ownership mindset.
Good-to-Have :
- Experience with containers, system design, observability tools.
Interview Process :
- Coding Round : DSA + problem solving
- System Design : LLD + HLD, scalability, microservices
- CTO Round : Technical deep dive + cultural fit
Review Criteria
- Strong Senior Data Engineer profile
- 4+ years of hands-on Data Engineering experience
- Must have experience owning end-to-end data architecture and complex pipelines
- Must have advanced SQL capability (complex queries, large datasets, optimization)
- Must have strong Databricks hands-on experience
- Must be able to architect solutions, troubleshoot complex data issues, and work independently
- Must have Power BI integration experience
- The CTC structure is 80% fixed and 20% variable.
Preferred
- Experience working with call center data and an understanding of the nuances of data generated in call centers.
- Experience implementing data governance, quality checks, or lineage frameworks
- Experience with orchestration tools (Airflow, ADF, Glue Workflows), Python, Delta Lake, Lakehouse architecture
Job Specific Criteria
- CV Attachment is mandatory
- Are you Comfortable integrating with Power BI datasets?
- We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?
Role & Responsibilities
We are seeking a highly experienced Senior Data Engineer with strong architectural capability, excellent optimisation skills, and deep hands-on experience in modern data platforms. The ideal candidate will have advanced SQL skills, strong expertise in Databricks, and practical experience working across cloud environments such as AWS and Azure. This role requires end-to-end ownership of complex data engineering initiatives, including architecture design, data governance implementation, and performance optimisation. You will collaborate with cross-functional teams to build scalable, secure, and high-quality data solutions.
Key Responsibilities-
- Lead the design and implementation of scalable data architectures, pipelines, and integration frameworks.
- Develop, optimise, and maintain complex SQL queries, transformations, and Databricks-based data workflows.
- Architect and deliver high-performance ETL/ELT processes across cloud platforms.
- Implement and enforce data governance standards, including data quality, lineage, and access control.
- Partner with analytics, BI (Power BI), and business teams to enable reliable, governed, and high-value data delivery.
- Optimise large-scale data processing, ensuring efficiency, reliability, and cost-effectiveness.
- Monitor, troubleshoot, and continuously improve data pipelines and platform performance.
- Mentor junior engineers and contribute to engineering best practices, standards, and documentation.
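As a sketch of the complex-SQL work described in the responsibilities above, here is a windowed aggregate run against an in-memory SQLite database; the table and column names are illustrative only (a Databricks workload would use Spark SQL, but the SQL itself is the transferable skill):

```python
import sqlite3

# In-memory database standing in for a warehouse table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (agent TEXT, duration INTEGER)")
conn.executemany("INSERT INTO calls VALUES (?, ?)",
                 [("ana", 120), ("ana", 60), ("raj", 300)])

# Window function: each call alongside its agent's average duration,
# computed in one pass instead of a self-join
rows = conn.execute("""
    SELECT agent, duration,
           AVG(duration) OVER (PARTITION BY agent) AS avg_duration
    FROM calls ORDER BY agent, duration
""").fetchall()
print(rows[0])  # ('ana', 60, 90.0)
```

Preferring window functions over correlated subqueries or self-joins is a typical optimisation on large datasets, since the partition is scanned once.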
Ideal Candidate
- Proven industry experience as a Senior Data Engineer, with ownership of high-complexity projects.
- Advanced SQL skills with experience handling large, complex datasets.
- Strong expertise with Databricks for data engineering workloads.
- Hands-on experience with major cloud platforms — AWS and Azure.
- Deep understanding of data architecture, data modelling, and optimisation techniques.
- Familiarity with BI and reporting environments such as Power BI.
- Strong analytical and problem-solving abilities with a focus on data quality and governance.
- Proficiency in Python or another programming language is a plus.
PERKS, BENEFITS AND WORK CULTURE:
Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified non-banking financial companies and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help you find an opportunity at any of our 500+ locations across India.
Planview is seeking a passionate Sr Software Engineer I to lead the development of internal AI tools and connectors, enabling seamless integration with internal and third-party data sources. This role will drive internal AI enablement and productivity across engineering and customer teams by consulting with business stakeholders, setting technical direction, and delivering scalable solutions.
Responsibilities:
- Work with business stakeholders to enable successful AI adoption.
- Develop connectors leveraging MCP or third-party APIs to enable new integrations.
- Prioritize and execute integrations with internal and external data platforms.
- Collaborate with other engineers to expand AI capabilities.
- Establish and monitor uptime metrics, set up alerts, and follow a proactive maintenance schedule.
- Support operations, including Docker-based and serverless deployments and troubleshooting.
- Work with DevOps engineers to manage and deploy new tools as required.
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field.
- 4+ years of experience in infrastructure engineering, data integration, or AI operations.
- Strong Python coding skills.
- Experience configuring and scaling infrastructure for large user bases.
- Proficiency with monitoring tools, alerting systems, and maintenance best practices.
- Hands-on experience with containerized and serverless deployments.
- Ability to code connectors using MCP or third-party APIs.
- Strong troubleshooting and support skills.
Preferred Qualifications:
- Experience with building RAG knowledge bases, MCP servers, and API integration patterns.
- Experience leveraging AI (LLMs) to boost productivity and streamline workflows.
- Exposure to working with business stakeholders to drive AI adoption and feature expansion.
- Familiarity with MCP server support and resilient feature design.
- Skilled at working as part of a global, diverse workforce.
- AWS Certification is a plus.
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
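The "clean, tested, and maintainable code with pytest" expectation above can be sketched as a small pure function plus a pytest-style test; the function name and behaviour are hypothetical examples, and the plain asserts also run standalone:

```python
def normalize_phone(raw: str) -> str:
    """Strip separators and formatting from a phone number string."""
    return "".join(ch for ch in raw if ch.isdigit())

def test_normalize_phone():
    # pytest discovers test_* functions automatically; keeping logic in
    # small pure functions like this is what makes them easy to test
    assert normalize_phone("+91 98765-43210") == "919876543210"
    assert normalize_phone("") == ""

test_normalize_phone()
```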
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
About BXI
BXI is India’s leading barter marketplace, enabling brands and businesses to trade products and services through a powerful digital platform. We are expanding our tech team and looking for a talented Full Stack Developer who can build, scale and optimise our JavaScript-based product ecosystem.
Key Responsibilities
- Develop, maintain, and enhance web applications using React.js (frontend) and Node.js (backend).
- Architect and implement scalable APIs and microservices.
- Deploy, manage, and monitor applications on AWS cloud infrastructure.
- Collaborate with Product, Design, and Business teams to understand requirements and translate them into technical solutions.
- Optimise application performance, enhance UI/UX, and improve overall platform stability.
- Troubleshoot, debug, and resolve complex technical issues.
- Follow coding standards, maintain proper documentation, and participate in code reviews.
- Contribute ideas for continuous improvement in architecture and development processes.
Required Skills & Experience
- 2–4 years of experience as a Full Stack Developer.
- Strong proficiency in JavaScript, React.js, Node.js, and related frameworks.
- Hands-on experience with AWS services (EC2, Lambda, S3, RDS, CloudWatch, etc.).
- Experience in building and integrating RESTful APIs.
- Excellent understanding of database technologies (MySQL, MongoDB, or similar).
- Familiarity with version control tools like Git.
- Strong debugging, problem-solving, and analytical skills.
- Ability to work independently as well as in a collaborative team environment.
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built ground up with new age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements; this includes writing complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
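The extraction-transformation-loading responsibility above can be sketched end to end with the standard library; the CSV layout, table name, and aggregation are assumptions for illustration (a production pipeline would target Redshift or another AWS store rather than SQLite):

```python
import csv, io, sqlite3

# Extract: read source records (a string stands in for a file or S3 object)
raw = "user,amount\nu1,10\nu2,5\nu1,7\n"
records = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and aggregate spend per user
totals = {}
for r in records:
    totals[r["user"]] = totals.get(r["user"], 0) + int(r["amount"])

# Load: persist into a warehouse-style table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE spend (user TEXT PRIMARY KEY, total INTEGER)")
db.executemany("INSERT INTO spend VALUES (?, ?)", totals.items())
print(db.execute("SELECT total FROM spend WHERE user='u1'").fetchone())  # (17,)
```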
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
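The message-queuing and stream-processing knowledge listed above follows one core pattern, sketched here with a stdlib queue standing in for a real broker such as Kafka or Kinesis (the transformation and sentinel-based shutdown are illustrative choices):

```python
import queue, threading

q = queue.Queue()   # stand-in for a broker topic/partition
results = []

def consumer():
    while True:
        msg = q.get()
        if msg is None:          # sentinel message: shut down cleanly
            break
        results.append(msg * 2)  # per-message "stream" transformation
        q.task_done()

t = threading.Thread(target=consumer)
t.start()
for event in [1, 2, 3]:          # producer side
    q.put(event)
q.put(None)
t.join()
print(results)  # [2, 4, 6]
```

Real brokers add durability, partitioning, and consumer groups on top, but the decoupled producer/consumer shape is the same.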
Company Overview
McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.
Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era
Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.
Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700M+ decision-makers, all in just a few clicks.
At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:
- Precision prospecting
- Intent-based targeting
- Data enrichment from 16+ premium sources
- AI-driven workflows to book more meetings, faster
We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.
EXPERIENCE
Duties you'll be entrusted with:
- Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
- Writing efficient, reusable, testable, and scalable code.
- Understanding, analyzing, and implementing business needs and feature modification requests, and converting them into software components
- Integrating user-oriented elements into different applications and data storage solutions
- Developing backend components, server-side logic, statistical learning models, and highly responsive web applications to enhance performance and responsiveness
- Designing and implementing high-availability, low-latency applications with data protection and security features
- Performance tuning and automation of applications, and enhancing the functionality of current software systems
- Keeping abreast with the latest technology and trends.
Expectations from you:
Basic Requirements
- Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
- Experience with Cloud platforms (AWS, Azure, GCP).
- Strong understanding of monitoring, logging, and observability practices.
- Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
- Expertise in designing, implementing, and optimizing Elasticsearch.
- Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
- Experience integrating Generative AI APIs.
- Working experience with high user concurrency.
- Experience with scaled databases handling millions of records: indexing, retrieval, etc.
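The indexing-and-retrieval point above comes down to whether the database can replace a full table scan with an index search. A minimal SQLite demonstration (table and index names are hypothetical; the same principle applies to MySQL or MongoDB at millions of rows):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, email TEXT)")
db.execute("CREATE INDEX idx_email ON users(email)")

# EXPLAIN QUERY PLAN shows how SQLite will execute the lookup
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan[0][-1])  # e.g. "SEARCH users USING INDEX idx_email (email=?)"
```

Without `idx_email` the plan would read "SCAN users", i.e. every row is examined; with it, retrieval cost grows logarithmically rather than linearly.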
Technical Skills
- Demonstrable experience in web application development with expertise in Node.js or Nest.js.
- Knowledge of database technologies and agile development methodologies.
- Experience working with databases, such as MySQL or MongoDB.
- Familiarity with web development frameworks, such as Express.js.
- Understanding of microservices architecture and DevOps principles.
- Well-versed with AWS and serverless architecture.
Soft Skills
- A quick and critical thinker with the ability to generate many ideas about a topic and bring fresh, innovative ideas to the table to enhance the impact of our content.
- Potential to apply innovative and exciting ideas, concepts, and technologies.
- Stay up-to-date with the latest design trends, animation techniques, and software advancements.
- Multi-tasking and time-management skills, with the ability to prioritize tasks.
THRIVE
Some of the extensive benefits of being part of our team:
- We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
- The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
- The McKinley Cares Program has a wide range of benefits:
- The wellness program covers sessions for mental wellness, and fitness and offers health insurance.
- In-house benefits have a referral bonus window and sponsored social functions.
- An Expanded Leave Basket including paid Maternity and Paternity Leaves and rejuvenation Leaves apart from the regular 20 leaves per annum.
- Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
- In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
- We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools.
We’re seeking a highly skilled, execution-focused Senior Backend Engineer with a minimum of 5 years of experience to join our team. This role demands hands-on expertise in building and scaling distributed systems, strong proficiency in Java, and deep knowledge of cloud-native infrastructure. You will be expected to design robust backend services, optimize performance across storage and caching layers, and enable seamless integrations using modern messaging and CI/CD pipelines.
You’ll be working in a high-scale, high-impact environment where reliability, speed, and efficiency are paramount. If you enjoy solving complex engineering challenges and have a passion for distributed systems, this is the right role for you.
Responsibilities:
- Design, develop, and maintain distributed backend systems at scale.
- Write high-performance, production-grade code in Java or Kotlin.
- Architect and optimize storage systems, ensuring efficient query performance and scalable data models.
- Implement caching strategies to reduce latency and improve system throughput.
- Build and manage services leveraging AWS cloud infrastructure.
- Develop resilient messaging pipelines using Kafka (or equivalent) for real-time data processing.
- Define and streamline CI/CD pipelines, ensuring rapid and reliable deployment cycles.
- Collaborate with product managers, frontend engineers, and DevOps to deliver end-to-end solutions.
- Monitor system performance, identify bottlenecks, and apply proactive fixes.
- Drive best practices in software engineering, testing, and code reviews.
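The caching responsibility above (reduce latency, improve throughput) can be sketched with stdlib memoisation; the lookup function is hypothetical, and a distributed deployment would use Redis or Memcached instead of an in-process cache:

```python
import functools, time

@functools.lru_cache(maxsize=1024)
def fetch_profile(user_id: int) -> dict:
    """Stand-in for a slow database or network call."""
    time.sleep(0.01)             # simulated I/O latency on a cache miss
    return {"id": user_id}

fetch_profile(7)                 # first call: miss, pays the latency
fetch_profile(7)                 # second call: served from cache
print(fetch_profile.cache_info().hits)  # 1
```

The design trade-off is staleness versus speed: an LRU cache like this never expires entries, so caches in front of mutable data usually add a TTL and explicit invalidation on writes.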
Key focus areas include:
- Performance optimization, reliability, and low-latency API design
- Microservices architecture and cloud-native development (Kubernetes, Docker, CI/CD)
- Experience with large-scale systems, monitoring, and performance profiling
- Deep understanding of concurrency, JVM tuning, and scalable data handling
Requirements:
- 5+ years of experience in backend engineering, with deep hands-on coding experience.
- Strong proficiency in Java or Kotlin and familiarity with modern frameworks.
- Strong hands-on expertise in Spring Boot or Quarkus frameworks.
- Proven track record in building scalable distributed systems.
- Hands-on expertise with AWS services (e.g., EC2, S3, Lambda, DynamoDB, RDS).
- Solid understanding of messaging systems like Kafka, RabbitMQ, or similar.
- Strong grasp of query performance optimization and storage system design.
- Experience with caching solutions (Redis, Memcached, etc.).
- Familiarity with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.).
- Excellent problem-solving skills and ability to thrive in fast-paced environments.
- Strong communication and collaboration skills, with a proactive mindset.
Benefits:
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.
Job Title: Practice Head - Cloud Business Development
Experience: 10 - 13 Years
Location: Bangalore
Territory Focus: India, MENA, and SEA
About Pacewisdom Solutions:
Pacewisdom is a deep-tech product engineering and consulting firm. As an AWS Advanced Tier Partner, our Cloud Center of Excellence is a high-growth vertical specialized in Migration, Modernization, and Cloud Strategy.
We are seeking a hands-on sales leader to drive our cloud business expansion across Indian and international markets.
Role Overview
As Practice Head, you will spearhead new business acquisition for the Cloud practice. This is a hunter role focused on securing contracts for Cloud Migration, Modernization, and Managed Services. The role offers the opportunity to build a dedicated sales team under your leadership within the first 12-18 months.
Key Responsibilities
• Identify and penetrate high-potential enterprise and mid-market accounts across diversified verticals in India, MENA, and South East Asia.
• Drive the full sales cycle for cloud transformation deals, ensuring a healthy balance of Consulting, Implementation, and recurring Managed Services revenue.
• Leverage our established collaboration with the AWS Partner Network and commercial distributors to drive co-selling models, accelerate market access, and utilize funding programs like MAP/OLA for deal closure.
• Execute an active field sales strategy by frequently visiting AWS and partner offices to build personal trust with Account Managers and representing the company at offline industry events.
• Move beyond transactional sales to close high-value engagements by conducting strategic discussions with C-level executives regarding Total Cost of Ownership, compliance, and modernization architecture.
• Build a predictable sales pipeline to meet aggressive growth targets, with a specific focus on executing larger ticket-size projects rather than smaller ad-hoc tasks.
Candidate Requirements
• Total IT sales experience of 10 to 13 years, with the last 5 years strictly focused on Cloud Services sales, preferably with exposure to the AWS ecosystem.
• Demonstrated history of closing individual deals valued over 1 Cr annually and experience managing or contributing to an annual portfolio revenue of 10-20 Cr.
• Hands-on experience in structuring profitable deals using cloud partner incentives and navigating cross-border sales in emerging markets like MENA or SEA.
• Ability to articulate technical differentiators to a non-technical audience and commercial value to technical stakeholders without constantly relying on presales support.
Compensation & Growth
• Competitive market-standard fixed compensation with an aggressive, performance-based incentive structure directly linked to deal closures, recurring revenue, and partner funding optimization.
• Backed by strong operational support and direct access to the AWS partner ecosystem.
• Clear roadmap to scale the vertical, with the mandate to hire and groom a reporting sales team upon achieving initial annual targets.
About the Company:
Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Setup dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market Implementations.
Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business.