50+ AWS (Amazon Web Services) Jobs in India
As a Software Development Engineer II at Hiver, you will play a critical role in building and scaling our product to thousands of users globally. We are growing fast and process over 5 million emails daily for thousands of active users on the Hiver platform. You will work with and mentor a group of smart engineers, and learn and grow yourself under very good mentors. You'll get the opportunity to work on complex technical challenges such as making the architecture scalable to handle growing traffic, building frameworks to monitor and improve the performance of our systems, and improving reliability and performance. You will code, design, develop, and maintain new product features, and improve existing services for performance and scalability.
What will you be working on?
- Build a new API for our users, or iterate on existing APIs in monolith applications.
- Build event-driven architecture using highly scalable message brokers like Kafka, RabbitMQ, etc. (see the producer sketch after this list).
- Build microservices based on performance and efficiency needs.
- Build frameworks to monitor and improve the performance of our systems.
- Build and upgrade systems to securely store sensitive data.
- Design, build and maintain APIs, services, and systems across Hiver's engineering teams.
- Debug production issues across services and multiple levels of the stack.
- Work with engineers across the company to build new features at large scale.
- Improve engineering standards, tooling, and processes.
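For illustration, here is a minimal sketch of the kind of event publishing this role involves, using the kafka-python client; the broker address, topic name, and payload are assumptions for the example, not Hiver's actual setup.

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Serialize event payloads as JSON; the broker address is an assumption.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish an "email processed" event for downstream consumers to react to.
producer.send("email-events", {"email_id": "e-123", "status": "processed"})
producer.flush()  # block until the broker has acknowledged the event
```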
What are we looking for?
- At least 3 years of experience scaling backend systems.
- Knowledge of Ruby on Rails (RoR) or Python is good to have, with hands-on experience in at least one project.
- Experience with microservices and event-driven architecture.
- Have worked on technologies like Kafka in building data pipelines with a high volume of data.
- Enjoy and have experience building lean APIs and great backend services.
- Think about systems and services and write high-quality code. We care much more about your general engineering skill than knowledge of a particular language or framework.
- Have worked extensively with SQL Databases and understand NoSQL databases and Caches.
- Have experience deploying applications on the cloud. We are on AWS, but experience with any cloud provider (GCP, Azure) would be great.
- Hold yourself and others to a high bar when working with production systems.
- Take pride in working on projects to successful completion involving a wide variety of technologies and systems.
- Thrive in a collaborative environment involving different stakeholders.
- Enjoy working with a diverse group of people with different expertise.
About the Role
We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.
Responsibilities
- Develop and maintain full-stack applications.
- Implement clean, maintainable, and efficient code.
- Collaborate with designers, product managers, and backend engineers.
- Participate in code reviews and debugging.
- Work with REST APIs/GraphQL.
- Contribute to CI/CD pipelines.
- Work independently as well as within a collaborative team environment.
Required Technical Skills
- Strong knowledge of JavaScript/TypeScript.
- Experience with React.js, Next.js.
- Backend experience with Node.js, Express, NestJS.
- Understanding of SQL/NoSQL databases.
- Experience with Git, APIs, debugging tools.
- Cloud familiarity (AWS/GCP/Azure).
AI and System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
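As one hedged illustration of what such an integration can look like, here is a sketch using the OpenAI Python client purely as an example; the model name, prompt, and helper function are assumptions, not a prescribed stack.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_ticket(ticket_text: str) -> str:
    """Send a support ticket to a hosted model and return a one-line summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for the example
        messages=[
            {"role": "system", "content": "Summarize support tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content
```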
Soft Skills
- Strong problem-solving ability.
- Good communication and teamwork.
- Fast learner and adaptable.
Education
Bachelor's degree in Computer Science / Engineering or equivalent.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
- Experience/familiarity with RDBMS and NoSQL database technologies like MySQL, MongoDB, Redis, Elasticsearch, and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experienced in JS-based build/package tools like Grunt, Gulp, Bower, Webpack.
Required Skills
• Minimum 3+ years of experience in Python is mandatory.
• You are responsible for growing your team
• You take ownership of the product/service you are writing
• You are able to write clean, pragmatic, and testable code
• Comfortable with basic Unix commands (+ Shell scripting)
• Very Proficient in Git & GitHub
• Have a GitHub & Stack Overflow profile
• Proficient in writing test-first code (i.e., writing testable code)
• You stick to the sprint timeline
• Must have worked on AWS.
Skills: Python 3.5+, Django 2.0+ or Flask, ORM (Django ORM, SQLAlchemy), Celery, Redis/RabbitMQ, Elasticsearch/Solr, Django REST Framework, GraphQL, Pandas, NumPy, SciPy, Linux, Git, DevOps, Docker, AWS. Knowledge of front-end technologies (HTML5, CSS3, SASS/LESS, object-oriented JavaScript, TypeScript) is good to have.
Knowledge of Machine Learning/AI concepts and keen interest/exposure/experience in other languages (Golang, Elixir, Rust) is a huge plus.
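To make the Celery + Redis/RabbitMQ items in the skills list above concrete, here is a minimal sketch; the broker URL, module name, and task body are illustrative assumptions.

```python
from celery import Celery  # pip install celery redis

# Redis as the message broker; the URL is an assumption for the example.
app = Celery("jobs", broker="redis://localhost:6379/0")


@app.task(bind=True, max_retries=3)
def send_report(self, user_id):
    try:
        print(f"building and emailing report for user {user_id}")
    except Exception as exc:
        # Retry with exponential backoff on transient failures.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
```

With this layout you would run a worker with `celery -A jobs worker` and enqueue work with `send_report.delay(42)`.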
Note: Salary will be offered based on your overall experience and last drawn salary.
Job Title: Python / Django Backend Developer
Experience: 3+ Years
Location: Gurgaon (Onsite)
Work Mode: 5 Days Working
About the Company
We are hiring for a product-based global furniture and homeware organization with operations in the UK and India. The company builds and maintains in-house digital platforms focused on design-to-delivery, supply-chain, and logistics. The team focuses on building scalable, high-performance internal systems.
Roles & Responsibilities
- Design, develop, and maintain RESTful APIs and backend services using Python & Django / Django REST Framework (a minimal sketch follows this list)
- Build scalable, secure, and optimized database schemas and queries using PostgreSQL/MySQL
- Collaborate with frontend, product, and QA teams for end-to-end feature delivery
- Write clean, reusable, and testable code following best engineering practices
- Optimize application performance, reliability, and scalability
- Participate in code reviews, documentation, and CI/CD processes
- Deploy and manage backend services on cloud infrastructure and web servers
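For context, here is a minimal Django REST Framework endpoint of the kind this role describes; the model and field names are illustrative assumptions, not the company's schema.

```python
from rest_framework import serializers, viewsets

from .models import Order  # hypothetical model for the example


class OrderSerializer(serializers.ModelSerializer):
    class Meta:
        model = Order
        fields = ["id", "status", "created_at"]


class OrderViewSet(viewsets.ModelViewSet):
    # select_related avoids N+1 queries when serializing related rows.
    queryset = Order.objects.select_related("customer")
    serializer_class = OrderSerializer
```

Wired into urls.py with a DefaultRouter, this yields list/retrieve/create/update/delete routes, with authentication and permissions applied per project policy.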
Required Skills & Qualifications
- Strong proficiency in Python and Django / Django REST Framework
- Solid understanding of relational databases (PostgreSQL/MySQL)
- Experience with REST API design, authentication & authorization
- Working knowledge of AWS services: EC2, ELB, S3, IAM, RDS
- Experience configuring and managing Nginx/Apache
- Familiarity with Git, Docker, and CI/CD workflows
- Strong problem-solving and debugging skills
Preferred Qualifications
- Experience with cloud platforms (AWS/GCP/Azure)
- Familiarity with microservices architecture
- Experience with Celery, RabbitMQ, Kafka
- Knowledge of testing frameworks (Pytest, unittest)
- Exposure to e-commerce platforms or high-traffic scalable systems
5+ yrs of experience in Cloud/DevOps roles.
Strong hands-on experience with AWS architecture, operations & automation (70% focus).
Solid Kubernetes/EKS administration experience (30% focus).
IaC experience (Terraform preferred).
Scripting (Python / Bash).
CI/CD tools (Jenkins, GitLab, GitHub Actions).
Experience working with BFSI or Managed Service projects is mandatory.
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce their carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover IoT cloud architecture from the ground up (it's a greenfield project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development; after all, this is a startup
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures (see the sketch after this list)
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
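A minimal sketch of the event-driven, serverless pattern listed above: an AWS Lambda handler persisting device telemetry to DynamoDB. The table name and payload shape are assumptions for illustration, not Drover's actual design.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("device-telemetry")  # assumed table name


def handler(event, context):
    # An IoT rule action can deliver the device payload as the event itself.
    table.put_item(
        Item={
            "device_id": event["device_id"],
            "ts": event["timestamp"],
            "battery": event.get("battery"),
        }
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```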
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
Strong proficiency in React Native, JavaScript (ES6+), and TypeScript.
Solid experience with mobile app architecture and state management libraries (e.g., Redux, MobX).
Good knowledge of iOS and Android native modules and debugging tools.
Exposure to AWS services such as S3, Cognito, API Gateway, etc.
Familiarity with mobile app CI/CD tools.
Experience with Git and Agile development
Hiring for SRE Lead
Exp: 7 - 12 yrs
Work Location : Mumbai ( Kurla West )
WFO
Skills :
Proficient in cloud platforms (AWS, Azure, or GCP), containerization (Kubernetes/Docker), and Infrastructure as Code (Terraform, Ansible, or Puppet).
Coding/Scripting: Strong programming or scripting skills in at least one language (e.g., Python, Go, Java) for automation and tooling development.
System Knowledge: Deep understanding of Linux/Unix fundamentals, networking concepts, and distributed systems.
Job Description: Full Stack Developer
Role Overview
We are seeking a skilled Full Stack Developer with at least 3 years of hands-on experience in building modern web and mobile applications. The ideal candidate will have strong expertise in React and/or Flutter on the frontend, backed by Java-based backend development, and the ability to work across the full software delivery lifecycle.
This role requires a pragmatic engineer who can translate business requirements into scalable, maintainable solutions while collaborating effectively with product, QA, and DevOps teams.
Key Responsibilities
- Design, develop, and maintain full-stack applications with responsive web and/or cross-platform mobile interfaces
- Build and optimize frontend components using React and/or Flutter with a focus on performance and usability
- Develop backend services and APIs using Java (Spring / Spring Boot preferred)
- Integrate frontend applications with backend services via RESTful APIs
- Write clean, well-structured, and testable code following best practices
- Participate in architecture discussions, code reviews, and technical decision-making
- Debug, troubleshoot, and resolve application issues across the stack
- Collaborate closely with designers, product managers, and other engineers
- Support deployments and work with DevOps pipelines where required
Required Skills & Experience
- Minimum 3 years of professional experience as a Full Stack Developer
- Strong experience with React and/or Flutter
- Solid backend development experience using Java
- Experience building REST APIs and integrating frontend with backend services
- Working knowledge of HTML, CSS, JavaScript, and modern frontend tooling
- Familiarity with relational databases (PostgreSQL/MySQL) and basic query optimization
- Experience with Git and collaborative development workflows
- Understanding of application security, authentication, and authorization concepts
Preferred Skills
- Experience with Spring Boot, Hibernate/JPA
- Exposure to Node.js or Angular
- Experience with cloud platforms (AWS preferred)
- Familiarity with CI/CD pipelines and containerization (Docker)
- Experience building offline-capable or mobile-first applications
- Prior work in enterprise or product based environments
Soft Skills
- Strong problem solving and analytical abilities
- Good communication skills and ability to work in cross functional teams
- Ownership mindset with attention to code quality and maintainability
- Ability to adapt quickly in a fast paced development environment
Experience Level
- 3+ years of relevant full stack development experience
Educational Qualification
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- A Master's degree or equivalent experience
To design, automate, and manage scalable cloud infrastructure that powers real-time AI and communication workloads globally.
Key Responsibilities
- Implement and manage CI/CD pipelines (GitHub Actions, Jenkins, or GitLab).
- Manage Kubernetes/EKS clusters.
- Implement infrastructure as code (provisioning via Terraform, CloudFormation, Pulumi, etc.); a minimal sketch follows this list.
- Implement observability (Grafana, Loki, Prometheus, ELK/CloudWatch).
- Enforce security/compliance guardrails (GDPR, DPDP, ISO 27001, PCI, HIPAA).
- Drive cost-optimization and zero-downtime deployment strategies.
- Collaborate with developers to containerize and deploy services.
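Infrastructure-as-code can itself be plain Python; here is a minimal Pulumi sketch (one of the tools named above) that provisions an S3 bucket. Resource names are illustrative, and a real run requires a Pulumi project and stack.

```python
import pulumi
import pulumi_aws as aws  # pip install pulumi pulumi-aws

# Declare the bucket; `pulumi up` computes and applies the diff.
artifacts = aws.s3.Bucket("build-artifacts")

# Export the generated bucket name for other stacks or pipelines.
pulumi.export("bucket_name", artifacts.id)
```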
Required Skills & Experience
- 4–8 years in DevOps or Cloud Infrastructure roles.
- Proficiency with AWS (EKS, Lambda, API Gateway, S3, IAM).
- Experience with infrastructure-as-code and CI/CD automation.
- Familiarity with monitoring, alerting, and incident management.
What Success Looks Like
- < 10 min build-to-deploy cycle.
- 99.999% uptime with proactive incident response (see the quick arithmetic below).
- Documented and repeatable DevOps workflows.
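For a sense of scale, the 99.999% ("five nines") target above leaves only about five minutes of downtime per year:

```python
minutes_per_year = 365 * 24 * 60               # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.99999)
print(f"{allowed_downtime:.2f} minutes/year")  # ~5.26 minutes/year
```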
We are looking for a hands-on PostgreSQL Lead / Senior DBA (L3) to join our production engineering team. This is not an architect role. The focus is on deep PostgreSQL expertise, real-world production ownership, and mentoring junior DBAs within an existing database ecosystem.
You will work as a senior individual contributor with technical leadership responsibilities, operating in a live, high-availability environment with guidance and support from a senior team.
Key Responsibilities
- Own and manage PostgreSQL databases in production environments
- Perform PostgreSQL installation, upgrades, migrations, and configuration
- Handle L2/L3 production incidents, root cause analysis, and performance bottlenecks
- Execute performance tuning and query optimization (a minimal tuning sketch follows this list)
- Manage backup, recovery, replication, HA, and failover strategies
- Support re-architecture and optimization initiatives led by senior stakeholders
- Monitor database health, capacity, and reliability proactively
- Collaborate with application, infra, and DevOps teams
- Mentor and guide L1/L2 DBAs as part of the L3 role
- Demonstrate ownership during night/weekend production issues (comp-offs provided)
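A minimal sketch of the query-tuning workflow referenced above, using psycopg2 to pull a plan with EXPLAIN ANALYZE; the connection string and query are illustrative assumptions.

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=app user=dba")  # assumed connection string
with conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) "
        "SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        # Look for sequential scans, row-estimate drift, and spilled sorts.
        print(line)
```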
Must-Have Skills (Non-Negotiable)
- Very strong PostgreSQL expertise
- Deep understanding of PostgreSQL internals and behavior
- Proven experience with:
- Performance tuning & optimization
- Production troubleshooting (L2/L3)
- Backup & recovery
- Replication & High Availability
- Ability to work independently in critical production scenarios
- PostgreSQL-focused profiles are absolutely acceptable (no requirement to know other DBs)
Good-to-Have (Not Mandatory)
- Exposure to AWS and/or Azure
- Experience with cloud-managed or self-hosted Postgres
- Knowledge of other databases (Oracle, MS SQL, DB2, ClickHouse, Neo4j, etc.) — purely a plus
Note: Strong on-prem PostgreSQL DBAs are welcome. Cloud gaps can be trained post-joining.
Work Model & Availability (Important – Please Read Carefully)
- Work From Office only (Bangalore – Koramangala)
- Regular day shift, but with a 24×7 production ownership mindset
- Availability for night/weekend troubleshooting when required
- No rigid shifts; expectation is responsible lead-level ownership
- Comp-offs provided for off-hours work
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
Job Details
- Job Title: Lead I - Data Engineering
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 6-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
Job Title: Senior Data Engineer (Kafka & AWS)
Responsibilities:
- Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services (a minimal consumer sketch follows this list).
- Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
- Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
- Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
- Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
- Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
- Uphold data security, governance, and compliance standards across all data operations.
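A minimal consumer-side sketch for pipelines like those above, using the kafka-python client with manual offset commits so records are acknowledged only after a successful write; the topic, group, and sink are illustrative assumptions.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python


def load_to_sink(payload: dict) -> None:
    """Stand-in for the real write (S3, Glue table, warehouse, ...)."""
    print(payload)


consumer = KafkaConsumer(
    "orders",                                   # assumed topic
    bootstrap_servers="localhost:9092",
    group_id="etl-loader",
    enable_auto_commit=False,                   # commit only after processing
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for record in consumer:
    load_to_sink(record.value)
    consumer.commit()  # at-least-once delivery semantics
```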
Requirements:
- Minimum of 5 years of experience in Data Engineering or related roles.
- Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
- Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
- Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
- Excellent problem-solving, communication, and collaboration skills.
- Flexibility to write production-quality code in both Python and Java as required.
Skills: AWS, Kafka, Python
Notice period - 0 to 15 days only
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
What You’ll Do:
We are looking for a Staff Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is a self-starter who is inquisitive, not afraid to take on and learn from challenges, and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams.
- Develop and standardize all interface points for analysts to retrieve and analyze data with a focus on research methodologies and data-based decision-making.
- Optimize queries and data access efficiencies, serve as an expert in how to most efficiently attain desired data points.
- Build “mastered” versions of the data for Analytics-specific querying use cases.
- Help with data ETL, table performance optimization.
- Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent.
- Build & operate scalable and robust data architectures.
- Interpret analytics methodology requirements and apply them to data architecture to create standardized queries and operations for use by analytics teams.
- Implement DataOps practices.
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics-specific objectives.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
- Operate between Engineers and Analysts to unify both practices for analytics insight creation.
Who You Are:
- 8+ years of experience in tech support, specialising in monitoring and maintaining data pipelines.
- Adept in market research methodologies and using data to deliver representative insights.
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases.
- Deep SQL experience is a must.
- Exceptional communication skills with the ability to collaborate and translate between technical and non-technical needs.
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high-volume and velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies.
- Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
- Proficient with SQL, Python or JVM-based language, Bash.
- Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious.
What you'll be doing:
As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will work with our dedicated engineering team to design, build, and deploy our core platforms and APIs, and to build and maintain back-end interfaces using modern frameworks. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision.
- Design & Implement: Lead the design, implementation and management of Trential’s products.
- Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
- Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & privacy laws, maintaining global interoperability (see the sketch after this list).
- Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
- Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.
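For orientation, here is the rough shape of a W3C Verifiable Credential mentioned above, written as a Python dict; the issuer, subject, and claim values are illustrative.

```python
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",      # illustrative issuer DID
    "issuanceDate": "2024-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",      # illustrative holder DID
        "degree": "BSc Computer Science",   # illustrative claim
    },
    # A cryptographic "proof" block is appended when the issuer signs it.
}
```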
What we're looking for:
- 3+ years of experience in backend development.
- Deep proficiency in JavaScript and Node.js, with experience building and operating distributed, fault-tolerant systems.
- Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
- Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.
Preferred Qualifications (Nice to Have)
- Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
- Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems that comply with them.
- Experience integrating AI/ML models into verification or data extraction workflows.
Immediately available performance test engineer with real-time exposure to LoadRunner and JMeter who has tested Java applications in AWS environments.
We are looking for a Software Engineer with strong hands-on development experience to help build, enhance, and maintain our technology platform. The role involves owning the end-to-end software development lifecycle and delivering scalable, high-quality applications aligned with business needs.
Key Responsibilities:
Technology & Development
- Design, develop, test, deploy, and maintain scalable software applications.
- Work across a multi-stack environment, including:
- Frontend: HTML, CSS, JavaScript, TypeScript
- Backend: Node.js, Express.js
- Database: MongoDB
- Architecture: Microservices
- Cloud: AWS
- Develop and enhance information systems by analyzing requirements, designing solutions, and implementing robust code.
- Ensure adherence to software development lifecycle (SDLC) best practices.
System Analysis & Solution Design
- Evaluate operational feasibility through problem analysis, requirement gathering, and solution design.
- Study system flows, data usage, and business processes to deliver effective solutions.
- Identify problem areas and propose improvements to system design and performance.
- Prepare system specifications, standards, and programming guidelines.
Documentation & Quality
- Create clear technical documentation including flowcharts, diagrams, layouts, and code comments.
- Demonstrate solutions effectively to stakeholders.
- Maintain high code quality, readability, and maintainability.
Operations & Maintenance
- Install, configure, and maintain software solutions across multiple application, web, and database servers.
- Conduct system analysis to improve operations and recommend changes to processes and policies.
- Support licensing, evaluation, testing, and approval of third-party software products.
- Ensure data security and confidentiality across all systems.
Architecture, Security & Integrations
- Work with SOA-based architectures and web services.
- Apply strong knowledge of web security best practices, including protection against XSS, CSRF, SQL injection, etc. (see the sketch after this list).
- Contribute to systems involving microservices architecture, with exposure to:
- Basic blockchain protocols
- Peer-to-peer (P2P) networks
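One concrete instance of the SQL-injection protection named above: parameterized queries keep user input out of the SQL text. A self-contained sketch with sqlite3 (table and input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

user_input = "alice@example.com' OR '1'='1"  # a classic injection attempt

# Safe: the driver binds the value; it is never spliced into the SQL string.
rows = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)
).fetchall()
print(rows)  # [] because the payload is treated as a literal string
```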
Required Experience & Skills
- Strong experience in full-stack or backend-heavy development using Node.js and modern JavaScript/TypeScript.
- Hands-on experience with MongoDB and distributed systems.
- Experience deploying and managing applications on AWS.
- Familiarity with microservices-based architectures and service-oriented design.
- Experience working with multiple application, web, and database servers.
Nice to Have
- Exposure to blockchain technologies and decentralized systems.
- Prior product development experience in the Supply Chain / Logistics domain.
Hiring for Full Stack with Agentic AI
Exp : 5 - 10 yrs
Work Location : Mumbai (Vikhroli)
Hybrid
Skills :
5+ yrs in full-stack development with demonstrated technical leadership.
• Backend: Node.js (Express/Nest.js), Java (Spring Boot / Micronaut).
• Frontend: React, TypeScript, HTML5, CSS3.
• Database: MySQL / MongoDB / Graph DB and familiarity with ORM frameworks.
• Deep understanding of microservices, RESTful APIs, and event-driven architectures.
• Familiarity with cloud platforms (AWS).
• Experience with WebSocket, HTTP, and similar communication protocols.
• Experience in CI/CD pipelines (GitHub Actions, Jenkins, etc.) and infrastructure-as-code concepts.
• Excellent problem-solving, debugging, and communication skills.
Exp: 7- 10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline (a minimal instrumentation sketch follows this list)
- Ensure production-ready disaster recovery and business continuity testing
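A minimal sketch of the latency instrumentation behind p95/p99 SLOs, using the Python prometheus_client; the metric name and port are illustrative. The quantiles themselves are computed at query time (e.g., with PromQL's histogram_quantile).

```python
import time

from prometheus_client import Histogram, start_http_server  # pip install prometheus-client

REQUEST_LATENCY = Histogram("request_latency_seconds", "Request latency in seconds")


@REQUEST_LATENCY.time()  # records wall-clock duration into histogram buckets
def handle_request():
    time.sleep(0.05)  # stand-in for real work


start_http_server(8000)  # exposes /metrics for Prometheus to scrape
handle_request()
```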
If interested, kindly share your updated resume at 82008 31681.
Seeking a Senior Staff Cloud Engineer to lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.
Responsibilities:
- Cloud Architecture & Strategy
- Define and evolve the company’s cloud architecture, with AWS as the primary platform.
- Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
- Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.).
- Partner with engineering and product to convert custom solutions into productised capabilities.
- Security & Compliance Enablement
- Act as a foundational partner in building out the company's security and compliance functions.
- Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
- Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
- Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
- Customer & Solutions Enablement
- Work with Solutions Engineering and customers to design and validate complex deployments.
- Contribute to processes that productise custom implementations into scalable platform features.
- Leadership & Influence
- Serve as a technical thought leader across cloud, data, and security domains.
- Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
- Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
- Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
- Data Platforms & Governance
- Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
- Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
- Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
- Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
- Developer Experience & DevOps
- Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
- Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
- Embed security, compliance, and reliability standards into the development lifecycle.
Requirements:
- 12+ years of experience in cloud engineering or architecture roles.
- Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
- Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
- Strong foundation in data management and governance, including lifecycle and compliance.
- Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
- Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
- Strong foundation in networking, security, observability, and performance engineering.
- Excellent communication and influencing skills, with the ability to partner across technical and business functions.
Good to Have:
- Exposure to Azure, GCP, or other cloud environments.
- Experience working in SaaS/PaaS at enterprise scale.
- Background in product engineering, with experience shaping technical direction in collaboration with product teams.
- Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
- Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
- The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
- You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges from sustainability to advanced manufacturing, while growing your career in a high-energy environment.
The Senior Staff Engineer will play a critical role in shaping the technical direction and long-term architecture of the Albert platform. This role is responsible for driving scalable, reliable, and high-impact software engineering that aligns with business goals and customer needs. The position requires a strong balance of technical depth, execution excellence, and cross-functional leadership to accelerate product development while maintaining high standards of quality, performance, and maintainability.
Responsibilities:
Technical Leadership
- Drive the architectural vision for core product areas across the Albert platform.
- Own the technical roadmap for major product features, ensuring alignment with business priorities and long-term platform evolution.
- Lead the design and development of highly reliable, performant, and scalable applications using modern tech stack.
- Establish durable engineering patterns and frameworks that enable product teams to move quickly with high confidence.
- Provide mentorship to Staff, Senior, and Mid-level engineers to uplevel engineering capabilities across product teams.
Execution Excellence
- Translate business goals and customer needs into scalable technical designs that accelerate product development.
- Solve complex, multi-system issues and guide teams through debugging, incident response, and performance improvements.
- Lead design reviews, define coding standards, and elevate system observability, reliability, and maintainability.
- Drive technical decisions involving tradeoffs between speed, quality, and scalability, bringing clarity to ambiguity.
- Identify, prioritise, and drive down technical debt that impacts product velocity and quality.
Cross-Team Influence & Collaboration
- Work with senior technical leadership to establish and uphold company-wide architectural standards and engineering practices.
- Partner closely with PMs to shape feature requirements, estimate complexity, and define engineering milestones.
- Collaborate with engineering, data, ML, and infra teams to develop cohesive, well-integrated product experiences.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 12+ years of software engineering experience, with 3+ years in senior technical leadership roles supporting product-oriented teams.
- Proven ability to lead end-to-end product development at scale — from concept through production rollout.
- Deep expertise in modern backend technologies, including Node.js, RESTful API design, backend services, and distributed system fundamentals, with strong proficiency across multiple programming languages.
- Strong understanding of product architecture patterns: domain-driven design, modular monoliths, microservices, event-driven systems.
- Proficiency with SQL & NoSQL databases (PostgreSQL, DynamoDB, MongoDB, etc.).
- Significant experience with AWS services and modern cloud architectures.
- Strong product intuition — ability to understand user needs, evaluate tradeoffs, and craft solutions that balance speed with quality.
- Outstanding communication, collaboration, and organisational influence skills.
Good to Have:
- Experience with modern front-end frameworks such as React.
- Experience building AI- or ML-driven user experiences.
- Experience scaling a product engineering team from 1 to N.
About The Role
- As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms.
- Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making.
- You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence.
- Collaborating closely with our client’s CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution.
- Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.
Key Responsibilities
- Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
- Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
- Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
- Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
- Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
- Build Strong Relationships: Cultivate and manage relationships with CxO/VP level stakeholders, positioning yourself as a trusted advisor.
- Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
- Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
- Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
- Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.
Qualifications
- Master’s (M. Tech., M.S.) in Computer Science or equivalent from reputed institutes like IIT, NIT preferred
- Overall 6–8 years of experience with minimum 2 years of relevant experience and a strong technical background
- Experience working in mid size IT Services company is preferred
Preferred Certification
- AWS Certified Data Analytics Specialty
- AWS Solution Architect Professional
- Azure Data Engineer + Solution Architect
- Databricks Certified Data Engineer / ML Professional
Technical Expertise
- Advanced knowledge of distributed architectures and data modeling practices.
- Extensive experience with Data Lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
- Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra (a minimal Spark sketch follows this list).
- Knowledge of BI tools including Power BI, Tableau, and QuickSight, plus open-source equivalents like Superset and Metabase, is good to have.
- Strong knowledge of data storage formats including Iceberg, Hudi, and Delta.
- Proficient programming skills in Python, Scala, Go, or Java.
- Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns.
- Experience working with multi-cloud environments, particularly AWS and Azure.
- Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.
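A minimal PySpark sketch of the batch side of this stack, reading Parquet from object storage and writing a daily rollup; the paths and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-rollup").getOrCreate()

# Assumed lake layout: raw events partitioned under one prefix.
events = spark.read.parquet("s3://data-lake/events/")

daily = (
    events
    .groupBy(F.to_date("ts").alias("day"))   # assumes a `ts` timestamp column
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").parquet("s3://data-lake/rollups/daily/")
```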
As a Senior Backend Developer, you will play a key role in building a product that will impact the way users experience yoga and fitness. Working closely with our technical and product leadership, you will help secure the performance, experience, and scalability of our product. With your deep experience, you will play a key part in our product and growth roadmap. We're looking for an engineer who not only writes high-quality backend code but also embodies a forward-thinking, AI-augmented development mindset. You should be someone who embraces AI and automation as a force multiplier, leveraging modern AI tools to accelerate delivery, increase code quality, and focus your time on higher-order problems.
Responsibilities
- At least 3 years of experience in product development and backend technologies, with strong understanding of the technology and familiarity with latest trends in backend technology developments.
- Design, develop, and maintain scalable backend services and APIs, ensuring high performance and reliability.
- Lead the architecture and implementation of new features, driving projects from concept to deployment.
- Optimize application performance and ensure high availability across systems.
- Implement robust security and data protection measures to safeguard critical information.
- Contribute to technical decision-making and architectural planning, ensuring long-term scalability and efficiency.
- Create and maintain clear, concise technical documentation for new systems, architectures, and codebases.
- Lead knowledge-sharing sessions to promote best practices across teams.
- Work closely with product managers, front-end developers, and other stakeholders to define requirements, design systems, and deliver impactful product features within reasonable timelines.
- Continuously identify opportunities for system improvements, automation, and optimizations.
- Lead efforts to implement new technologies and processes that enhance engineering productivity and product performance.
- Take ownership of critical incidents, performing root cause analysis and implementing long-term solutions to minimize downtime and ensure business continuity.
- Ability to communicate clearly and effectively at various levels - intra-team, inter-group, spoken skills, and written skills - including email, presentation and articulation skills.
- Has strong knowledge of AI-assisted development tools, with hands-on experience reducing boilerplate coding, identifying bugs faster, and optimizing system design.
Qualifications
- 2+ years of strong experience developing services in Go
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- 3+ years of experience in backend software engineering with a strong track record of delivering complex backend systems, preferably in cloud-native environments.
- Strong experience with designing and maintaining large-scale databases (SQL and NoSQL) and knowledge of performance optimization techniques.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and cloud-native architectures (containers, serverless, microservices) is highly desirable.
- Familiarity with modern software development practices, including CI/CD, test automation, and Agile methodologies.
- Proven ability to solve complex engineering problems with innovative solutions and practical thinking.
- Strong leadership and interpersonal skills, with the ability to work cross-functionally and influence technical direction across teams.
- Excellent communication skills, with the ability to communicate complex technical ideas to both technical and non-technical stakeholders.
- Demonstrated ability to boost engineering output through strategic use of AI tools and practices—contributing to a “10x developer” mindset focused on outcomes, not just effort.
- Comfortable working in a fast-paced, high-leverage environment where embracing automation and AI-first workflows is part of the engineering culture.
Required Skills
- Strong experience in Go
- Strong experience in backend technologies and cloud-native environments.
- Proficiency in designing and maintaining large-scale databases.
- Strong problem-solving skills and familiarity with modern software development practices.
- Excellent communication skills.
Preferred Skills
- Experience with AI-assisted development tools.
- Knowledge of performance optimization techniques.
- Experience in Agile methodologies.
About the Company
MyYogaTeacher is a fast-growing health tech startup with a mission to improve the physical and mental well-being of the entire planet. We are the first online marketplace to connect qualified Fitness and Yoga coaches from India with consumers worldwide to provide personalized 1-on-1 sessions via live video conference (app, web). We started in 2019 and have been showing tremendous traction with rave customer reviews.
- Over 200,000 happy customers
- Over 335,000 5 star reviews
- Over 150 Highly qualified coaches on the platform
- 95% of sessions are completed with a 5-star rating
Headquartered in California, with operations based in Bangalore, we are dedicated to providing exceptional service and promoting the benefits of yoga and fitness coaching worldwide.
Role Description
This is a full-time on-site role in Bengaluru for a Full Stack Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end web development, software development, full-stack development, and using Cascading Style Sheets (CSS) to build effective and efficient applications.
Qualifications
- Back-End Web Development and Full-Stack Development skills
- Front-End Development and Software Development skills
- Proficiency in Cascading Style Sheets (CSS)
- Experience with Python, Django, and Flask frameworks
- Strong problem-solving and analytical skills
- Ability to work collaboratively in a team environment
- Bachelor's or Master's degree in Computer Science or relevant field
- Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
- Front-end: React.js
- Data Engineering: Useful experience blending data engineering with core software engineering.
- Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
- CI/CD Tools: Familiarity with GitHub Actions is a plus.
- Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
- Code Optimization: Proficient in profiling and optimizing Python code.
Profile: Senior Data Engineer (Informatica MDM)
Primary Purpose:
The Senior Data Engineer will be responsible for building new segments in a Customer Data Platform (CDP), maintaining those segments, and understanding the data requirements, data integrity, data quality, and data sources involved in building specific use cases. The candidate should also have an understanding of ETL processes, as well as of integrations with cloud service providers such as Microsoft Azure, Azure Data Lake Services, and Azure Data Factory, and with cloud data warehouse platforms in addition to Enterprise Data Warehouse environments. The ideal candidate will also have proven experience in data analysis and management, with excellent analytical and problem-solving abilities.
Major Functions/Responsibilities
• Design, develop and implement robust and extensible solutions to build segmentations using Customer Data Platform.
• Work closely with subject matter experts to identify and document business requirements and functional specs, and translate them into appropriate technical solutions.
• Responsible for estimating, planning, and managing the user stories, tasks, and reports on Agile projects.
• Develop advanced SQL Procedures, Functions and SQL jobs.
• Performance tuning and optimization of ETL Jobs, SQL Queries and Scripts.
• Configure and maintain scheduled ETL jobs, data segments and refresh.
• Support exploratory data analysis, statistical analysis, and predictive analytics.
• Support production issues and maintain existing data systems by researching and troubleshooting any issues/problems in a timely manner.
• Proactive, great attention to detail, results-oriented problem solver.
Preferred Experience
• 6+ years of experience in writing SQL queries and stored procedures to extract, manipulate and load data.
• 6+ years of experience designing, building, testing, and maintaining data integrations for data marts and data warehouses.
• 3+ years of experience integrating Azure/AWS Data Lakes, Azure Data Factory, and IDMC (Informatica Cloud Services).
• In-depth understanding of database management systems, online analytical processing (OLAP), and the ETL (extract, transform, load) framework.
• Excellent verbal and written communication skills
• Collaboration with both onshore and offshore development teams.
• A good understanding of marketing tools like Salesforce Marketing Cloud, Adobe Marketing, or Microsoft Customer Insights Journeys, along with Customer Data Platforms, will be important to this role.
Communication
• Facilitate project team meetings effectively.
• Effectively communicate relevant project information to superiors
• Deliver engaging, informative, well-organized presentations that are effectively tailored to the intended audience.
• Serve as a technical liaison with development partner.
• Serve as a communication bridge between applications team, developers and infrastructure team members to facilitate understanding of current systems
• Resolve and/or escalate issues in a timely fashion.
• Understand how to communicate difficult/sensitive information tactfully.
• Works under the direction of the Technical Data Lead / Data Architect.
Education
• Bachelor’s degree or higher in Engineering, Technology, or a related field required.
Senior DevOps Engineer (8–10 years)
Location: Mumbai
Role Summary
As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.
Key Responsibilities
Platform & Cloud Infrastructure
- Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN).
- Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
- Lead capacity planning, cost optimization, and performance tuning across environments.
CI/CD & Release Engineering
- Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
- Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
Containers, Kubernetes & Runtime
- Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries.
Reliability, Observability & Incident Management
- Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets.
- Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
Security & Compliance (DevSecOps)
- Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
- Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
Data, Networking & Edge
- Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies.
- Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.
Ways of Working & Leadership
- Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
- Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.
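For the SLO/SLI error-budget practice named above, here is a minimal Python sketch of the underlying arithmetic; the target, window, and downtime figures are illustrative assumptions, not numbers from this role.

```python
# Minimal error-budget arithmetic for a 99.9% availability SLO.
# All figures below are illustrative assumptions.

SLO_TARGET = 0.999              # 99.9% availability objective
WINDOW_MINUTES = 30 * 24 * 60   # 30-day rolling window

def error_budget_remaining(observed_availability: float) -> float:
    """Fraction of the window's error budget still unspent."""
    budget = 1.0 - SLO_TARGET             # allowed unavailability
    burned = 1.0 - observed_availability  # actual unavailability
    return max(0.0, (budget - burned) / budget)

if __name__ == "__main__":
    observed = 1.0 - (20 / WINDOW_MINUTES)  # e.g., 20 minutes of downtime
    print(f"Budget remaining: {error_budget_remaining(observed):.1%}")
```

Burning through the budget faster than the window elapses is the usual trigger for freezing releases and prioritizing reliability work.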
Must‑Have Qualifications
- 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
- Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
- Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD.
- Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
- Experience implementing observability stacks and responding to production incidents.
- Scripting in Bash/Python; ability to automate ops workflows and platform tasks.
Good-to-Have / Preferred
- Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
- Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
- Experience with GitOps patterns, service tiering, and SLO/SLI design.
- Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.
Education
- Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana (see the sketch after this list)
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
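As a hedged illustration of the CloudWatch monitoring bullet above, the boto3 sketch below creates a CPU alarm; the region, instance ID, and SNS topic ARN are placeholder assumptions, not values from this posting.

```python
# Sketch: create a CloudWatch alarm on EC2 CPU utilization with boto3.
# Instance ID, region, and SNS topic ARN below are placeholder assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=2,        # alarm only after two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```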
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of two use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
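To make the fraud/risk expectation above concrete, here is a small scikit-learn sketch under stated assumptions (synthetic data with roughly 1% positives) showing why PR-AUC is the metric of choice for severe class imbalance:

```python
# Sketch: fraud-style classification under severe class imbalance,
# evaluated with PR-AUC. Synthetic data only; names are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=20_000, n_features=20, weights=[0.99, 0.01], random_state=42
)  # ~1% positives, mimicking fraud base rates
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print(f"PR-AUC: {average_precision_score(y_te, scores):.3f}")
```

On data this imbalanced, plain accuracy is near 99% for a model that flags nothing, which is why the posting calls out PR-AUC explicitly.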
We are looking for a skilled Node.js Developer with PHP experience to build, enhance, and maintain ERP and EdTech platforms. The role involves developing scalable backend services, integrating ERP modules, and supporting education-focused systems such as LMS, student management, exams, and fee management.
Key Responsibilities
Develop and maintain backend services using Node.js and PHP.
Build and integrate ERP modules for EdTech platforms (Admissions, Students, Exams, Attendance, Fees, Reports).
Design and consume RESTful APIs and third-party integrations (payment gateway, SMS, email).
Work with databases (MySQL / MongoDB / PostgreSQL) for high-volume education data.
Optimize application performance, scalability, and security.
Collaborate with frontend, QA, and product teams.
Debug, troubleshoot, and provide production support.
Required Skills
Strong experience in Node.js (Express.js / NestJS).
Working experience in PHP (Core PHP / Laravel / CodeIgniter).
Hands-on experience with ERP systems.
Domain experience in EdTech / Education ERP / LMS.
Strong knowledge of MySQL and database design.
Experience with authentication, role-based access, and reporting.
Familiarity with Git, APIs, and server environments.
Preferred Skills
Experience with online examination systems.
Knowledge of cloud platforms (AWS / Azure).
Understanding of security best practices (CSRF, XSS, SQL Injection).
Exposure to microservices or modular architecture.
Qualification
Bachelor’s degree in Computer Science or equivalent experience.
3–6 years of relevant experience in Node.js & PHP development
Job Description -Technical Project Manager
Job Title: Technical Project Manager
Location: Bhopal / Bangalore (On-site)
Experience Required: 7+ Years
Industry: Fintech / SaaS / Software Development
Role Overview
We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.
Key Responsibilities
Project & Team Management
- Manage daily tasks for Android, Frontend, and Backend developers
- Conduct daily stand-ups, weekly planning, and reviews
- Track progress, identify blockers, and ensure timely delivery
- Maintain sprint boards, task estimations, and timelines
Technical Requirement Translation
- Convert business requirements into technical tasks
- Communicate requirements clearly to developers
- Create user stories, flow diagrams, and PRDs
- Ensure requirements are understood and implemented correctly
Quality & Build Review
- Validate build quality, UI/UX flow, functionality
- Check API integrations, errors, performance issues
- Ensure coding practices and architecture guidelines are followed
- Perform preliminary QA before handover to testing or clients
Issue Resolution
- Identify development issues early
- Coordinate with developers to fix bugs
- Escalate major issues to founders with clear insights
Reporting & Documentation
- Daily/weekly reports to management
- Sprint documentation, release notes
- Maintain project documentation & version control processes
Cross-Team Communication
- Act as the single point of contact for management
- Align multiple tech teams with business goals
- Coordinate with HR and operations for resource planning
Required Skills
- Strong understanding of Android, Web (Frontend/React), Backend development flows
- Knowledge of APIs, Git, CI/CD, basic testing
- Experience with Agile/Scrum methodologies
- Ability to review builds and suggest improvements
- Strong documentation skills (Jira, Notion, Trello, Asana)
- Excellent communication & leadership
- Ability to handle pressure and multiple projects
Good to Have
- Prior experience in Fintech projects
- Basic knowledge of UI/UX
- Experience in preparing FSD/BRD/PRD
- QA experience or understanding of test cases
Salary Range: 9 to 12 LPA

Global digital transformation solutions provider.
Job Description
We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.
Key Responsibilities:
- Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
- Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
- Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
- Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
- Ensure compliance with security best practices and organizational policies across GCP environments.
- Document processes, configurations, and architectural decisions to maintain operational transparency.
- Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.
Mandatory Skills:
- Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
- Jenkins – Expertise in Declarative Pipeline creation and optimization.
- CI/CD – Strong understanding of automated build, test, and deployment workflows.
- Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
- Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
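Illustrating the scripting requirement in the last bullet above, here is a hedged Python sketch using the official kubernetes client to wait for a GKE deployment rollout; the deployment name, namespace, and kubectl context are placeholder assumptions.

```python
# Sketch: poll a Kubernetes deployment until its rollout completes.
# Assumes kubectl is already authenticated against the target GKE cluster.
import time
from kubernetes import client, config

def wait_for_rollout(name: str, namespace: str, timeout_s: int = 300) -> bool:
    config.load_kube_config()  # reuse the local kubectl context
    apps = client.AppsV1Api()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        dep = apps.read_namespaced_deployment(name, namespace)
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if desired > 0 and ready == desired:
            return True
        time.sleep(5)
    return False

if __name__ == "__main__":
    ok = wait_for_rollout("web", "default")  # placeholder names
    print("rollout complete" if ok else "rollout timed out")
```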
Preferred Skills:
- Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
- Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
- Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
- GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.
Skills
GCP, Jenkins, CI/CD, AWS
******
Notice period - 0 to 15 days only
Location – Pune, Trivandrum, Kochi, Chennai
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary: INR 12,00,000 - 14,00,000
Job Summary
Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python
- Implement IaC (Terraform/CloudFormation)
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
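In the spirit of the vulnerability-assessment duties above, here is a very small Python sketch that checks which of a short list of TCP ports answer on a host. The host and ports are assumptions for illustration, and such checks should only ever be run against systems you are authorized to test.

```python
# Sketch: naive TCP port check using only the standard library.
# Run only against hosts you are explicitly authorized to test.
import socket

def open_ports(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))  # placeholder target
```

Real VAPT work layers dedicated tooling and manual testing on top of basics like this; the sketch only shows the shape of the automation.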
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
Backend Developer (Django)
About the Role:
We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.
Key Responsibilities:
- Develop and maintain Python-based web applications using Django and Django Rest Framework.
- Build and integrate RESTful APIs.
- Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
- Contribute to improving development workflows through automation.
- Assist in deploying applications using cloud platforms like Heroku or AWS.
- Write clean, maintainable, and efficient code.
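As a minimal sketch of the Django Rest Framework work described above: the Article model and its fields are hypothetical, and the snippet assumes an already-configured Django project rather than running standalone.

```python
# Sketch: a CRUD API with Django Rest Framework.
# `myapp.models.Article` and its fields are placeholder assumptions.
from rest_framework import routers, serializers, viewsets
from myapp.models import Article  # hypothetical model

class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body", "created_at"]

class ArticleViewSet(viewsets.ModelViewSet):
    """List/create/retrieve/update/delete endpoints for articles."""
    queryset = Article.objects.all().order_by("-created_at")
    serializer_class = ArticleSerializer

router = routers.DefaultRouter()
router.register(r"articles", ArticleViewSet)
# In urls.py: urlpatterns = [path("api/", include(router.urls))]
```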
Requirements:
Backend:
- Strong understanding of Django and Django Rest Framework (DRF).
- Experience with task queues like Celery.
Frontend (Basic Understanding):
- Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.
Hosting & Deployment:
- Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.
Linux/Server Knowledge:
- Basic to intermediate understanding of Linux commands and server environments.
- Ability to work with terminal, virtual environments, SSH, and basic server configurations.
Python Knowledge:
- Good grasp of OOP concepts.
- Familiarity with Pandas for data manipulation is a plus.
Soft & Team Skills:
- Strong collaboration and team management abilities.
- Ability to work in a team-driven environment and coordinate tasks smoothly.
- Problem-solving mindset and attention to detail.
- Good communication skills and eagerness to learn
What We Offer:
- A collaborative, friendly, and growth-focused work environment.
- Opportunity to work on real-time projects using modern technologies.
- Guidance and mentorship to help you advance in your career.
- Flexible and supportive work culture.
- Opportunities for continuous learning and skill development.
Location : Bhayander (Onsite)
Immediate to 30-day joiner and Mumbai-based candidate preferred.
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
4. Security, Compliance & Risk Highlighting
• Conduct frequent risk assessments and identify vulnerabilities in:
o Cloud architecture
o Access policies (IAM)
o Secrets & key management
o Data flows & network exposure
• Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
o Microservices
o Caching layers
o CDN distribution (CloudFront)
o Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
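As a hedged example of the "automate repetitive infrastructure tasks" item above, the boto3 sketch below backfills a missing Environment tag on EC2 instances; the tag key and default value are illustrative assumptions.

```python
# Sketch: enforce an Environment tag on untagged EC2 instances with boto3.
# The tag key and default value are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2")

def tag_untagged_instances(default_env: str = "staging") -> None:
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if "Environment" not in tags:
                    ec2.create_tags(
                        Resources=[inst["InstanceId"]],
                        Tags=[{"Key": "Environment", "Value": default_env}],
                    )

if __name__ == "__main__":
    tag_untagged_instances()
```

In practice a team would promote one-off scripts like this into IaC or scheduled jobs so the convention stays enforced across environments.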
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join company?
• Build and operate infrastructure powering millions of monthly users.
• Opportunity to shape DevOps culture and cloud architecture from the ground up.
• High-impact role in a fast-scaling Indian OTT product.
About the company:
Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates an ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with its headquarters in India.
About the role:
As a Technical Project Manager, you will lead the planning, execution, and delivery of complex technical projects while ensuring alignment with business objectives and timelines. You will act as a bridge between technical teams and stakeholders, managing resources, risks, and communications to deliver high-quality solutions. This role demands strong leadership, project management expertise, and technical acumen to drive project success in a dynamic and collaborative environment.
Qualifications:
- Education Background: Any ME / M Tech / BE / B Tech
Key Competencies:
Technical Skills
1. Data & BI Technologies
- Proficiency in SQL & PL/SQL for database querying and optimization.
- Understanding of data warehousing concepts, dimensional modeling, and data lake/lakehouse architectures.
- Experience with BI tools such as Power BI, Tableau, Qlik Sense/View.
- Familiarity with traditional platforms like Oracle, Informatica, SAP BO, BODS, BW.
2. Cloud & Data Engineering
- Strong knowledge of AWS (EC2, S3, Lambda, Glue, Redshift), Azure (Data Factory, Synapse, Databricks, ADLS), Snowflake (warehouse architecture, performance tuning), and Databricks (Delta Lake, Spark).
- Experience with cloud-based ETL/ELT pipelines, data ingestion, orchestration, and workflow automation.
3. Programming
- Hands-on experience in Python or similar scripting languages for data processing and automation.
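For illustration of the data-processing scripting named above, a minimal pandas sketch; the file name and columns are placeholder assumptions.

```python
# Sketch: monthly revenue rollup with pandas.
# "sales.csv" and its columns are placeholder assumptions.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])
monthly = (
    df.assign(month=df["order_date"].dt.to_period("M"))
      .groupby("month", as_index=False)["revenue"].sum()
)
print(monthly.head())
```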
Soft Skills
- Strong leadership and team management skills.
- Excellent verbal and written communication for stakeholder alignment.
- Structured problem-solving and decision-making capability.
- Ability to manage ambiguity and handle multiple priorities.
Tools & Platforms
- Cloud: AWS, Azure
- Data Platforms: Snowflake, Databricks
- BI Tools: Power BI, Tableau, Qlik
- Data Management: Oracle, Informatica, SAP BO
- Project Tools: JIRA, MS Project, Confluence
Key Responsibilities:
- End-to-End Project Management: Lead the team through the full project lifecycle, delivering techno-functional solutions.
- Methodology Expertise: Apply Agile, PMP, and other frameworks to ensure effective project execution and resource management.
- Technology Integration: Oversee technology integration and ensure alignment with business goals.
- Stakeholder & Conflict Management: Manage relationships with customers, partners, and vendors, addressing expectations and conflicts proactively.
- Technical Guidance: Provide expertise in software design, architecture, and ensure project feasibility.
- Change Management: Analyse new requirements/change requests, ensuring alignment with project goals.
- Effort & Cost Estimation: Estimate project efforts and costs and identify potential risks early.
- Risk Mitigation: Proactively identify risks and develop mitigation strategies, escalating issues in advance.
- Hands-On Contribution: Participate in coding, code reviews, testing, and documentation as needed.
- Project Planning & Monitoring: Develop detailed project plans, track progress, and monitor task dependencies.
- Scope Management: Manage project scope, deliverables, and exclusions, ensuring technical feasibility.
- Effective Communication: Communicate with stakeholders to ensure agreement on scope, timelines, and objectives.
- Reporting: Provide status and RAG reports, proactively addressing risks and issues.
- Change Control: Manage changes in project scope, schedule, and costs using appropriate verification techniques.
- Performance Measurement: Measure project performance with tools and techniques to ensure progress.
- Operational Process Management: Oversee operational tasks like timesheet approvals, leave, appraisals, and invoicing.
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
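As a hedged sketch of the third-party connectors described above, here is a small Python client with retries and timeouts for reliability; the CRM endpoint and response shape are assumptions, not a real integration.

```python
# Sketch: resilient HTTP connector using requests with retry/backoff.
# The endpoint URL and response shape are placeholder assumptions.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session() -> requests.Session:
    retry = Retry(total=3, backoff_factor=0.5,
                  status_forcelist=[429, 500, 502, 503, 504])
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

def fetch_contacts(api_key: str) -> list[dict]:
    session = make_session()
    resp = session.get(
        "https://api.example-crm.com/v1/contacts",  # hypothetical CRM API
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["contacts"]
```

Retries on transient 5xx/429 responses plus explicit timeouts are the baseline for keeping connectors like this observable and reliable in production.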
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
What You’ll Do:
- Setting up formal data practices for the company.
- Building and running super stable and scalable data architectures.
- Making it easy for folks to add and use new data with self-service pipelines.
- Getting DataOps practices in place.
- Designing, developing, and running data pipelines to help out Products, Analytics, data scientists and machine learning engineers.
- Creating simple, reliable data storage, ingestion, and transformation solutions that are a breeze to deploy and manage.
- Writing and Managing reporting API for different products.
- Implementing different methodologies for different reporting needs.
- Teaming up with all sorts of people – business folks, other software engineers, machine learning engineers, and analysts.
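To give shape to the pipelines described above, a minimal Airflow DAG sketch (assuming Airflow 2.4+; the task logic and schedule are illustrative assumptions):

```python
# Sketch: a two-step daily pipeline as an Airflow DAG (Airflow 2.4+ style).
# Task bodies are stand-ins for real extract/transform logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source")

def transform():
    print("clean and join into reporting tables")

with DAG(
    dag_id="example_reporting_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```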
Who You Are:
- Bachelor’s degree in engineering (CS / IT) or equivalent degree from a well-known Institute / University.
- 3.5+ years of experience in building and running data pipelines for tons of data.
- Experience with public clouds like GCP or AWS.
- Experience with Apache open-source projects like Spark, Druid, Airflow, and big data databases like BigQuery, Clickhouse.
- Experience making data architectures that are optimised for both performance and cost.
- Good grasp of software engineering, DataOps, data architecture, Agile, and DevOps.
- Proficient in SQL, Java, Spring Boot, Python, and Bash.
- Good communication skills for working with technical and non-technical people.
- Someone who thinks big, takes chances, innovates, dives deep, gets things done, hires and develops the best, and is always learning and curious.
Job Description: Business Analyst – Data Integrations
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
About the Role
We’re looking for a skilled Business Analyst – Data Integrations who can bridge the gap between business operations and technology teams, ensuring smooth, efficient, and scalable integrations. If you’re passionate about hospitality tech and enjoy solving complex data challenges, we’d love to hear from you!
What You’ll Do
Key Responsibilities
- Collaborate with vendors to gather requirements for API development and ensure technical feasibility.
- Collect API documentation from vendors; document and explain business logic to use external data sources effectively.
- Access vendor applications to create and validate sample data; ensure the accuracy and relevance of test datasets.
- Translate complex business logic into documentation for developers, ensuring clarity for successful integration.
- Monitor all integration activities and support tickets in Jira, proactively resolving critical issues.
- Lead QA testing for integrations, overseeing pilot onboarding and ensuring solution viability before broader rollout.
- Document onboarding processes and best practices to streamline future integrations and improve efficiency.
- Build, train, and deploy machine learning models for forecasting, pricing, and optimization, supporting strategic goals.
- Drive end-to-end execution of data integration projects, including scoping, planning, delivery, and stakeholder communication.
- Gather and translate business requirements into actionable technical specifications, liaising with business and technical teams.
- Oversee maintenance and enhancement of existing integrations, performing RCA and resolving integration-related issues.
- Document workflows, processes, and best practices for current and future integration projects.
- Continuously monitor system performance and scalability, recommending improvements to increase efficiency.
- Coordinate closely with Operations for onboarding and support, ensuring seamless handover and issue resolution.
Desired Skills & Qualifications
- Strong experience in API integration, data analysis, and documentation.
- Familiarity with Jira for ticket management and project workflow.
- Hands-on experience with machine learning model development and deployment.
- Excellent communication skills for requirement gathering and stakeholder engagement.
- Experience with QA test processes and pilot rollouts.
- Proficiency in project management, data workflow documentation, and system monitoring.
- Ability to manage multiple integrations simultaneously and work cross-functionally.
Required Qualifications
- Experience: Minimum 4 years in hotel technology or business analytics, preferably handling data integration or system interoperability projects.
- Technical Skills:
  - Basic proficiency in SQL or database querying.
  - Familiarity with data integration concepts such as APIs or ETL workflows (preferred but not mandatory).
  - Eagerness to learn and adapt to new tools, platforms, and technologies.
- Hotel Technology Expertise: Understanding of systems such as PMS, CRS, Channel Managers, or RMS.
- Project Management: Strong organizational and multitasking abilities.
- Problem Solving: Analytical thinker capable of troubleshooting and driving resolution.
- Communication: Excellent written and verbal skills to bridge technical and non-technical discussions.
- Attention to Detail: Methodical approach to documentation, testing, and deployment.
Preferred Qualifications
- Exposure to debugging tools and troubleshooting methodologies.
- Familiarity with cloud environments (AWS).
- Understanding of data security and privacy considerations in the hospitality industry.
Why LodgIQ?
- Join a fast-growing, mission-driven company transforming the future of hospitality.
- Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
- Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
- Competitive salary and performance bonuses.
For more information, visit https://www.lodgiq.com

Global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
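For the SQL windowing-function knowledge listed above, here is the PySpark equivalent as a hedged sketch (the "latest record per customer" pattern; column names and sample rows are illustrative assumptions):

```python
# Sketch: latest order per customer via a window function in PySpark.
# Column names and sample rows are placeholder assumptions.
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("window-example").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c1", "2024-02-01", 80.0),
     ("c2", "2024-01-15", 200.0)],
    ["customer_id", "order_date", "amount"],
)

w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest = (orders
          .withColumn("rn", F.row_number().over(w))  # rank within customer
          .filter(F.col("rn") == 1)
          .drop("rn"))
latest.show()
```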
Additional Comments:
# of Resources: 22 | Role(s): Technical Role | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
Design, build, and maintain scalable data pipelines using Databricks and PySpark.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
Ensure data quality, performance, and reliability across data workflows.
Participate in code reviews, data architecture discussions, and performance optimization initiatives.
Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
Key Skills:
Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
Excellent problem-solving, communication, and collaboration skills.
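To ground the ETL/ELT skills above, a short PySpark sketch of a read-deduplicate-write job; the S3 paths, columns, and partitioning scheme are placeholder assumptions.

```python
# Sketch: simple PySpark ETL — read raw JSON, clean, write partitioned parquet.
# Bucket paths and column names are placeholder assumptions.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")   # assumed source
clean = (raw
         .dropDuplicates(["event_id"])
         .withColumn("event_date", F.to_date("event_ts"))
         .filter(F.col("event_type").isNotNull()))
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")                           # prune by date
      .parquet("s3://example-bucket/curated/events/"))
```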
Skills: Databricks, PySpark & Python, SQL, AWS Services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
******
Notice period - Immediate to 15 days
Location: Bangalore
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
- Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
- Configure and maintain Docker containers and/or Kubernetes clusters.
- Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
- Automate build, deployment, and monitoring processes.
- Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
- Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
- Ensure system scalability, reliability, and security.
- Troubleshoot production issues and perform root-cause analysis.
- Collaborate with engineering teams to improve deployment and development workflows.
- Optimize infrastructure costs and improve performance.
Required Skills & Qualifications
- 3+ years of experience in DevOps, SRE, or Cloud Engineering.
- Strong hands-on knowledge of AWS cloud services.
- Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
- Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
- Experience with Linux administration and shell scripting.
- Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
- Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
- Experience with Terraform or CloudFormation (IaC).
- Good understanding of Node.js or similar application deployments.
- Knowledge of NGINX/Apache and load balancing concepts.
- Strong problem-solving and communication skills.
Preferred/Good to Have
- Experience with Kubernetes (EKS).
- Experience with Serverless architectures (Lambda).
- Experience with Redis, MongoDB, RDS.
- Certification in AWS Solutions Architect / DevOps Engineer.
- Experience with security best practices, IAM policies, and DevSecOps.
- Understanding of cost optimization and cloud cost management.
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
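As a minimal sketch of the testing practice above, pytest-style: a tiny reconciliation helper with one test. The function, field names, and paise amounts are illustrative assumptions, not the actual product logic.

```python
# Sketch: reconcile a ledger against gateway records, with a pytest case.
# Amounts are in paise; all names here are placeholder assumptions.
def reconcile(ledger: dict[str, int], gateway: dict[str, int]) -> list[str]:
    """Return transaction IDs whose amounts disagree or are missing."""
    mismatched = [txn for txn, amt in ledger.items() if gateway.get(txn) != amt]
    missing = [txn for txn in gateway if txn not in ledger]
    return sorted(mismatched + missing)

def test_reconcile_flags_mismatch_and_missing():
    ledger = {"t1": 10_000, "t2": 5_000}
    gateway = {"t1": 10_000, "t2": 4_990, "t3": 700}
    assert reconcile(ledger, gateway) == ["t2", "t3"]
```

Writing the failing test first and then the helper is the TDD loop the requirements below call out with pytest/unittest.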
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
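For the caching and distributed-systems line above, a minimal cache-aside sketch in Python; the Redis location, key scheme, and TTL are illustrative assumptions, and the DB call is a stand-in.

```python
# Sketch: cache-aside read path with Redis.
# Host, key scheme, and 5-minute TTL are placeholder assumptions.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_profile(user_id: str) -> dict:
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    profile = fetch_profile_from_db(user_id)   # cache miss: go to the source
    r.setex(key, 300, json.dumps(profile))     # cache for 5 minutes
    return profile

def fetch_profile_from_db(user_id: str) -> dict:
    return {"id": user_id, "name": "example"}  # stand-in for a real query
```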
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
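A light sketch of the exploratory analysis and data-quality checks described above, using pandas; the dataset and columns are placeholder assumptions.

```python
# Sketch: quick EDA pass — shape, null rates, and numeric correlations.
# "bookings.csv" and its columns are placeholder assumptions.
import pandas as pd

df = pd.read_csv("bookings.csv", parse_dates=["created_at"])
print(df.shape)                                              # rows, columns
print(df.isna().mean().sort_values(ascending=False).head())  # worst null rates
print(df.select_dtypes("number").corr().round(2))            # correlations
```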
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
  - Computer Science / IT
  - Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, Sql
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science
About Phi Commerce
Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform that processes digital payments at the doorstep, online, and in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments and going beyond routine payment processing.
Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of enabling very large, untapped blue-ocean sectors still dominated by offline payment modes such as cash and cheque to accept digital payments.
The core team comprises industry veterans with complementary skill sets and nearly 100 years of combined global experience at noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software, and Electra Card Services.
Awards & Recognitions:
The company's innovative work has been recognized at prestigious forums within the short span of its existence:
- Certificate of Recognition as a start-up by the Department of Industrial Policy and Promotion.
- Winner of the "Best Payment Gateway of the Year" award at the Payments & Cards Awards 2018.
- Winner at the Payments & Cards Awards 2017 in three categories: Best Startup of the Year, Best Online Payment Solution of the Year (Consumer), and Best Online Payment Solution of the Year (Merchant).
- Winner of the NPCI Ideathon on Blockchain in Payments.
- Shortlisted by the Govt. of Maharashtra among the top 100 start-ups pan-India across 8 sectors.
About the role:
We are seeking an experienced and dynamic QA Manager to lead our quality assurance team in delivering high-quality software products for our organization. The ideal candidate will have a strong background in manual and automation testing, with hands-on experience in SQL, UNIX commands, STLC/SDLC, and managing QA for critical financial systems. You will be responsible for test strategy creation, resource planning, stakeholder communication, and ensuring process adherence to deliver robust and secure systems.
Key Responsibilities:
Team & Test Management
- Lead and manage a team of manual and automation testers, providing guidance, mentorship, and performance feedback.
- Define and execute test strategies and plans for each product release in alignment with business goals and timelines.
- Oversee test case design, execution, and test data management to ensure full coverage across all functionalities.
- Plan and manage QA deliverables in coordination with release and sprint planning.
Process & Quality Oversight
- Ensure compliance with STLC, SDLC, and Defect Management processes.
- Maintain and manage QA environments, ensuring they are up-to-date and aligned with production-like conditions.
- Implement best practices and continuously improve QA processes for efficiency and quality.
Stakeholder & Communication Management
- Serve as a primary point of contact for all QA-related updates across internal teams and external partners.
- Provide regular DSR (Daily Status Reports) and WSR (Weekly Status Reports) to stakeholders.
- Communicate effectively with both technical and non-technical stakeholders regarding quality issues, risks, and expectations.
Technical Responsibilities
- Work with SQL for data validation and backend testing (a short sketch follows this list).
- Use UNIX commands for system checks, log analysis, and troubleshooting.
- Collaborate closely with developers, product managers, and release engineers to ensure high-quality deliverables.
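To ground the SQL data-validation work above, here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The transactions/settlements schema is a hypothetical stand-in for a real payments database, not PayPhi's actual schema.

    import sqlite3

    # Hypothetical payments schema: every CAPTURED transaction should have
    # exactly one settlement row with a matching amount.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE transactions (txn_id TEXT PRIMARY KEY, amount REAL, status TEXT);
    CREATE TABLE settlements  (txn_id TEXT PRIMARY KEY, settled_amount REAL);
    INSERT INTO transactions VALUES ('T1', 100.0, 'CAPTURED'), ('T2', 250.0, 'CAPTURED');
    INSERT INTO settlements  VALUES ('T1', 100.0);
    """)

    # Captured transactions that were never settled, or settled for the wrong
    # amount -- the core of a settlement/reconciliation check.
    rows = conn.execute("""
        SELECT t.txn_id, t.amount, s.settled_amount
        FROM transactions t
        LEFT JOIN settlements s ON s.txn_id = t.txn_id
        WHERE t.status = 'CAPTURED'
          AND (s.txn_id IS NULL OR s.settled_amount <> t.amount)
    """).fetchall()

    assert rows == [("T2", 250.0, None)], rows  # T2 was captured but never settled

In practice the same query pattern runs against the real backend database, and the assertion becomes a test-suite check or a daily reconciliation report.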
Required Skills & Experience:
Technical Skills:
- Strong hands-on experience with SQL and UNIX/Linux commands.
- Proficient in manual test case creation and automation testing processes.
- Good understanding of QA tools like JIRA, TestRail, Confluence, and defect tracking systems.
- Knowledge of test automation frameworks and scripting languages (optional but a plus).
Domain Expertise:
- Solid understanding of payment systems, including ATM, E-commerce transactions, settlement, and reconciliation workflows.
- Experience in testing APIs, transaction flows, chargebacks, refunds, and financial reporting systems.
Leadership & Soft Skills:
- Proven experience in leading QA teams and managing test resources effectively.
- Strong analytical and problem-solving skills to identify root causes of defects and quality issues.
- Excellent communication and interpersonal skills for effective collaboration across teams and stakeholders.
Qualifications:
- 10+ years of total QA experience, with at least 2 years in a QA leadership/managerial role.
- Experience in fintech, banking, or payment processing environments is strongly preferred.
The Production Infrastructure Manager is responsible for overseeing and maintaining the infrastructure that powers our payment gateway systems in a high-availability production environment. This role requires deep technical expertise in cloud platforms, networking, and security, along with strong leadership capability to guide a team of infrastructure engineers. You will ensure the system’s reliability, performance, and compliance with regulatory standards while driving continuous improvement.
Key Responsibilities:
Infrastructure Management
- Manage and optimize infrastructure for payment gateway systems to ensure high availability, reliability, and scalability.
- Oversee daily operations of production environments, including AWS cloud services, load balancers, databases, and monitoring systems.
- Implement and maintain infrastructure automation, provisioning, configuration management, and disaster recovery strategies.
- Develop and maintain capacity planning, monitoring, and backup mechanisms to support peak transaction periods.
- Oversee regular patching, updates, and version control to minimize vulnerabilities.
Team Leadership
- Lead and mentor a team of infrastructure engineers and administrators.
- Provide technical direction to ensure efficient and effective implementation of infrastructure solutions.
Cross-Functional Collaboration
- Work closely with development, security, and product teams to ensure infrastructure aligns with business needs and regulatory requirements (PCI-DSS, GDPR).
- Ensure infrastructure practices meet industry standards and security requirements (PCI-DSS, ISO 27001).
Monitoring & Incident Management
- Monitor infrastructure performance using tools like Prometheus, Grafana, Datadog, etc. (a minimal instrumentation sketch follows this subsection).
- Conduct incident response, root cause analysis, and post-mortems to prevent recurring issues.
- Manage and execute on-call duties, ensuring timely resolution of infrastructure-related issues.
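As a minimal sketch of the instrumentation behind such monitoring, the prometheus_client Python library can expose application metrics for a Prometheus server to scrape; the metric names, labels, and port below are illustrative assumptions.

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Metric names, labels, and port are illustrative only.
    REQUEST_LATENCY = Histogram(
        "gateway_request_seconds", "Request latency in seconds", ["endpoint"]
    )
    REQUEST_ERRORS = Counter(
        "gateway_request_errors_total", "Failed requests", ["endpoint"]
    )

    def handle_payment():
        # time() records the duration of the block into the histogram.
        with REQUEST_LATENCY.labels(endpoint="/pay").time():
            time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
            if random.random() < 0.01:
                REQUEST_ERRORS.labels(endpoint="/pay").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # metrics exposed at :8000/metrics for scraping
        while True:
            handle_payment()

Grafana dashboards and alert rules would then be built on top of the scraped gateway_request_seconds and gateway_request_errors_total series.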
Documentation
- Maintain comprehensive documentation, including architecture diagrams, processes, and disaster recovery plans.
Skills and Qualifications
Required
- Bachelor’s degree in Computer Science, IT, or equivalent experience.
- 8+ years of experience managing production infrastructure in high-availability, mission-critical environments (fintech or payment gateways preferred).
- Expertise in AWS cloud environments.
- Strong experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation (a short sketch follows the Required list).
- Deep understanding of:
  - Networking (load balancers, firewalls, VPNs, distributed systems)
  - Database systems (SQL/NoSQL), HA & DR strategies
  - Automation tools (Ansible, Chef, Puppet) and containerization/orchestration (Docker, Kubernetes)
  - Security best practices, encryption, vulnerability management, PCI-DSS compliance
- Experience with monitoring tools (Prometheus, Grafana, Datadog).
- Strong analytical and problem-solving skills.
- Excellent communication and leadership capabilities.
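The list above names Terraform and CloudFormation; since Terraform uses its own HCL syntax, here is the same IaC idea expressed in Python via AWS CDK, which synthesizes a CloudFormation template. This is a minimal sketch, and the stack and bucket are hypothetical resources.

    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3

    class PaymentLogsStack(cdk.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            # Versioned, encrypted, fully private bucket (hypothetical resource).
            s3.Bucket(
                self,
                "TransactionLogs",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )

    app = cdk.App()
    PaymentLogsStack(app, "PaymentLogsStack")
    app.synth()  # emits a CloudFormation template under cdk.out/

Running cdk deploy would then create or update the stack through CloudFormation, which is what makes the infrastructure reviewable and repeatable.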
Preferred
- Experience in fintech/payment industry with regulatory exposure.
- Ability to operate effectively under pressure and ensure service continuity.
The Quality Engineer is responsible for planning, developing, and executing tests for CFRA's financial software. The responsibilities include designing and implementing tests, debugging, and defining corrective actions. The role plays an important part in our company's product development process. Our ideal candidate will conduct tests to ensure software runs efficiently and meets client needs while remaining cost-effective.

You will be part of the CFRA Data Collection Team, responsible for collecting, processing, and publishing financial market data for internal and external stakeholders. The team uses a contemporary stack in the AWS Cloud to design, build, and maintain a robust data architecture, data engineering pipelines, and large-scale data systems. You will be responsible for verifying and validating all data quality and completeness parameters for the automated (ETL) pipeline processes (new and existing).
Key Responsibilities
- Review requirements, specifications and technical design documents to provide timely and meaningful feedback
- Create detailed, comprehensive and well-structured test plans and test cases
- Estimate, prioritize, plan and coordinate testing activities
- Identify, record, document thoroughly and track bugs
- Develop and apply testing processes for new and existing products to meet client needs
- Liaise with internal teams to identify system requirements and develop testing plans
- Investigate the causes of non-conforming software and train users to implement solutions
- Stay up-to-date with new testing tools and test strategies
Desired Skills
- Proven work experience in software development and quality assurance
- Strong knowledge of software QA methodologies, tools and processes
- Experience in writing clear, concise and comprehensive test plans and test cases
- Hands-on experience with automated testing tools
- Acute attention to detail
- Experience working in an Agile/Scrum development process
- Excellent collaboration skills
Technical Skills
- Proficient with SQL and capable of developing queries for testing
- Familiarity with Python, especially for scripting tests (a short sketch follows this list)
- Familiarity with Cloud Technology and working with remote servers
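To ground the SQL/Python test-scripting skills above, here is a minimal pytest-style sketch of a data-completeness check on a pipeline output; the columns, rows, and checks are hypothetical placeholders, not CFRA's actual quality parameters.

    import pandas as pd
    import pytest

    REQUIRED_COLUMNS = {"ticker", "as_of_date", "close_price"}

    @pytest.fixture
    def output():
        # In a real suite this would load the pipeline's actual output;
        # the rows here are made-up placeholders.
        return pd.DataFrame(
            {
                "ticker": ["AAA", "BBB"],
                "as_of_date": ["2024-01-02", "2024-01-02"],
                "close_price": [10.0, 20.0],
            }
        )

    def test_required_columns_present(output):
        assert REQUIRED_COLUMNS <= set(output.columns)

    def test_no_missing_prices(output):
        assert output["close_price"].notna().all()

    def test_prices_are_positive(output):
        assert (output["close_price"] > 0).all()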
The Senior Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Design and develop CFRA’s integrated content publishing platform, building on a proprietary third-party editorial and publishing system for digital publishing.
- Designing and Developing APIs: Design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions (a minimal handler sketch follows this list).
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
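To make the API responsibilities above concrete, here is a minimal sketch of a Python Lambda handler behind an Amazon API Gateway proxy integration. The report_id path parameter and the response shape are illustrative assumptions, not CFRA's actual API design.

    import json

    def handler(event, context):
        # Minimal handler for an API Gateway proxy integration; the report_id
        # path parameter and response payload are illustrative assumptions.
        report_id = (event.get("pathParameters") or {}).get("report_id")
        if not report_id:
            return {
                "statusCode": 400,
                "body": json.dumps({"error": "report_id is required"}),
            }
        # A real implementation would fetch or generate the report here,
        # e.g. from Amazon S3 or Amazon RDS.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"report_id": report_id, "status": "ready"}),
        }

API Gateway maps the HTTP request into the event dict and expects the statusCode/headers/body shape back, which is why the handler returns that structure.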
Desired Skills and Experience
- Development: 5+ years of extensive experience in designing, developing, and deploying applications using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: Proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python, Node.js, or others, as well as experience with serverless frameworks on AWS.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure system stability and performance.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.