50+ AWS (Amazon Web Services) Jobs in Bangalore (Bengaluru) | AWS (Amazon Web Services) Job openings in Bangalore (Bengaluru)
Job Overview:
We are looking for a full-time Infrastructure & DevOps Engineer to support and enhance our cloud, server, and network operations. The role involves managing virtualization platforms, container environments, automation tools, and CI/CD workflows while ensuring smooth, secure, and reliable infrastructure performance. The ideal candidate should be proactive, technically strong, and capable of working collaboratively across teams.
Qualifications and Requirements
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field (B.E/B.Tech/BCA/MCA/M.Tech).
- Strong understanding of cloud platforms (AWS, Azure, GCP), including core services and IT infrastructure concepts.
- Hands-on experience with virtualization concepts and tools, including vCenter, hypervisors, nested virtualization, and bare-metal servers.
- Practical knowledge of Linux and Windows servers, including cron jobs and essential Linux commands.
- Experience working with Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of Terraform and Ansible for infrastructure automation.
- Scripting proficiency in Python and Bash (PowerShell optional).
- Networking fundamentals (IP, routing, subnetting, LAN/WAN/WLAN).
- Experience with firewalls, basic security concepts, and tools like pfSense.
- Familiarity with Git/GitHub for version control and team collaboration.
- Ability to perform API testing using cURL and Postman.
- Strong understanding of the application deployment lifecycle and basic application deployment processes.
- Good problem-solving, analytical thinking, and documentation skills.
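As a quick illustration of the subnetting fundamentals listed above, Python's standard `ipaddress` module can carve a network into subnets and test address membership. The addresses below are arbitrary examples, not anything from this role:

```python
import ipaddress

# Split a /24 into four /26 subnets and inspect each one.
network = ipaddress.ip_network("192.168.1.0/24")
subnets = list(network.subnets(new_prefix=26))

for subnet in subnets:
    # num_addresses counts every address, including network and broadcast,
    # so usable hosts are two fewer.
    print(subnet, "hosts:", subnet.num_addresses - 2)

# Check whether a given address falls inside a subnet.
print(ipaddress.ip_address("192.168.1.70") in subnets[1])  # → True
```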
Roles and Responsibilities
- Manage and maintain Linux/Windows servers, virtualization environments, and cloud infrastructure across AWS/Azure/GCP.
- Use Terraform and Ansible to provision, automate, and manage infrastructure components.
- Support application deployment lifecycle—from build and testing to release and rollout.
- Deploy and maintain Kubernetes clusters and containerized workloads using Docker.
- Develop, enhance, and troubleshoot CI/CD pipelines and integrate DevSecOps practices.
- Write automation scripts using Python/Bash to optimize recurring tasks.
- Conduct API testing using curl and Postman to validate integrations and service functionality.
- Configure and monitor firewalls including pfSense for secure access control.
- Troubleshoot network, server, and application issues using tools like Wireshark, ping, traceroute, and SNMP.
- Maintain Git/GitHub repos, manage branching strategies, and participate in code reviews.
- Prepare clear, detailed documentation including infrastructure diagrams, workflows, SOPs, and configuration records.
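Several of the responsibilities above (scripted automation, API validation, cron-driven tasks) can be combined in a small health-check script. This is a minimal stdlib-only sketch; the URL is purely illustrative:

```python
import json
import urllib.request
import urllib.error

def check_endpoint(url: str, timeout: float = 5.0) -> dict:
    """Probe an HTTP endpoint and summarise the result.

    A minimal stand-in for the curl/Postman checks described above.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"url": url, "status": resp.status, "ok": 200 <= resp.status < 300}
    except urllib.error.URLError as exc:
        # Covers DNS failures, refused connections, timeouts, etc.
        return {"url": url, "status": None, "ok": False, "error": str(exc)}

# Hypothetical usage; a script like this could be scheduled from cron,
# e.g.  */5 * * * * /usr/bin/python3 healthcheck.py
# result = check_endpoint("https://status.example.com/health")
# print(json.dumps(result, indent=2))
```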
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains, including network security, encryption, identity and access management, key management, and security governance, as well as cloud-native security services such as GuardDuty and Azure Security Center.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Must have experience working in B2B SaaS product companies.
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Demonstrates a strong ownership mindset, proactive security-first thinking, and the ability to communicate risks in clear business language
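To make the policy-as-code item above concrete, here is a toy check that flags security-group rules open to the world. The data structure is a deliberately simplified, hypothetical stand-in; real tools such as OPA or Checkov evaluate actual Terraform plan JSON:

```python
def find_open_ingress(security_groups):
    """Flag ingress rules open to the world (0.0.0.0/0).

    A toy policy-as-code check over a simplified security-group shape.
    """
    findings = []
    for sg in security_groups:
        for rule in sg.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append((sg["name"], rule.get("from_port")))
    return findings

# Illustrative input only; not a real cloud configuration.
groups = [
    {"name": "web", "ingress": [{"from_port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"name": "db", "ingress": [{"from_port": 5432, "cidr_blocks": ["10.0.0.0/16"]}]},
]
print(find_open_ingress(groups))  # → [('web', 443)]
```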
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
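One common CI/CD security integration mentioned above is scanning commits for leaked credentials. This is a heavily simplified sketch in the spirit of secret scanners such as detect-secrets; real rulesets are far larger, and the sample string is fabricated for illustration:

```python
import re

# Two illustrative detection rules; production scanners ship hundreds.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (line_number, rule_name) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

# A fabricated diff fragment; the key below is not a real credential.
sample = "db_host = prod.internal\naws_key = AKIAABCDEFGHIJKLMNOP\n"
print(scan_text(sample))  # → [(2, 'aws_access_key')]
```

In a pipeline, a non-empty result would typically fail the build before the change ships.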
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
Perks, Benefits and Work Culture:
This is a well-rounded opportunity in a fast-paced environment: you will juggle multiple concepts while maintaining the quality of your work, share your ideas freely, and learn a great deal on the job. You will work with a team of highly talented young professionals and enjoy the comprehensive benefits the company offers.
As a Software Development Engineer II at Hiver, you will play a critical role in building and scaling our product to thousands of users globally. We are growing very fast and process over 5 million emails daily for thousands of active users on the Hiver platform. You will work with and mentor a group of smart engineers, while learning and growing yourself with the support of very good mentors. You will tackle complex technical challenges such as scaling the architecture to handle growing traffic, building frameworks to monitor and improve the performance of our systems, and improving reliability and performance. You will code, design, develop, and maintain new product features, and improve existing services for performance and scalability.
What will you be working on?
- Build a new API for our users, or iterate on existing APIs in monolith applications.
- Build event-driven architecture using highly scalable message brokers like Kafka, RabbitMQ, etc.
- Build microservices based on the performance and efficiency needs.
- Build frameworks to monitor and improve the performance of our systems.
- Build and upgrade systems to securely store sensitive data.
- Design, build and maintain APIs, services, and systems across Hiver's engineering teams.
- Debug production issues across services and multiple levels of the stack.
- Work with engineers across the company to build new features at large-scale.
- Improve engineering standards, tooling, and processes.
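The event-driven work described above boils down to the producer/consumer pattern that brokers like Kafka or RabbitMQ provide at scale. A minimal in-process sketch, with a purely illustrative event shape and topic names, looks like this:

```python
import queue
import threading

# An in-memory queue standing in for a broker topic/partition.
events = queue.Queue()
processed = []

def consumer():
    """Drain events until a None sentinel signals shutdown."""
    while True:
        event = events.get()
        if event is None:
            break
        processed.append(f"handled:{event['type']}")
        events.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# Produce a couple of hypothetical domain events.
for event_type in ("email.received", "email.assigned"):
    events.put({"type": event_type})

events.put(None)  # tell the consumer to stop
worker.join()
print(processed)  # → ['handled:email.received', 'handled:email.assigned']
```

Real brokers add durability, partitioning, and consumer groups on top of this basic shape.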
What are we looking for?
- Have worked on scaling backend systems for at least 3 years.
- Knowledge of Ruby on Rails (RoR) or Python is good to have, with hands-on experience in at least one project.
- Have worked on microservice and event-driven architectures.
- Have worked on technologies like Kafka in building data pipelines with a high volume of data.
- Enjoy and have experience building Lean APIs and amazing backend services.
- Think about systems and services and write high-quality code. We care much more about your general engineering skill than knowledge of a particular language or framework.
- Have worked extensively with SQL Databases and understand NoSQL databases and Caches.
- Have experience deploying applications on the cloud. We are on AWS, but experience with any cloud provider (GCP, Azure) would be great.
- Hold yourself and others to a high bar when working with production systems.
- Take pride in working on projects to successful completion involving a wide variety of technologies and systems.
- Thrive in a collaborative environment involving different stakeholders.
- Enjoy working with a diverse group of people with different expertise.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of Typescript, JavaScript, HTML, CSS, JSON and REST based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experienced in JS-based build/Package tools like Grunt, Gulp, Bower, Webpack.
Required Skills
• Minimum 3+ years of experience in Python is mandatory.
• You are responsible for growing your team
• You take ownership of the product/service you are writing
• You are able to write clean, pragmatic, and testable code
• Comfortable with basic Unix commands (+ Shell scripting)
• Very Proficient in Git & GitHub
• Have a GitHub & StackOverflow profile
• Proficient in writing test-first code (i.e., writing testable code)
• You have to stick to the timeline of the sprint
• Must have worked on AWS.
Skills: Python 3.5+, Django 2.0+ or Flask, ORM (Django ORM, SQLAlchemy), Celery, Redis/RabbitMQ, Elasticsearch/Solr, Django REST Framework, GraphQL, Pandas, NumPy, SciPy, Linux, Git, DevOps, Docker, AWS.
Knowledge of front-end technologies is good to have (HTML5, CSS3, SASS/LESS, object-oriented JavaScript, TypeScript).
Knowledge of machine learning/AI concepts and keen interest, exposure, or experience in other languages (Golang, Elixir, Rust) is a huge plus.
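On the "test-first code" requirement above: the idea is that behaviour is pinned down by assertions before (or alongside) the implementation. A small, hedged sketch using a pure, deterministic function in the style of a task-retry delay (the function and defaults are illustrative, not from any specific codebase):

```python
def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff delay for retrying a task (e.g. a queued job).

    Pure and deterministic, so it is trivially testable; a production
    version would usually add random jitter.
    """
    if attempt < 1:
        raise ValueError("attempt must be >= 1")
    return min(cap, base * (2 ** (attempt - 1)))

# Test-first style: expected behaviour stated as assertions.
assert backoff_delay(1) == 0.5
assert backoff_delay(3) == 2.0
assert backoff_delay(10) == 30.0  # capped at 30 seconds
```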
We are looking for a hands-on PostgreSQL Lead / Senior DBA (L3) to join our production engineering team. This is not an architect role. The focus is on deep PostgreSQL expertise, real-world production ownership, and mentoring junior DBAs within an existing database ecosystem.
You will work as a senior individual contributor with technical leadership responsibilities, operating in a live, high-availability environment with guidance and support from a senior team.
Key Responsibilities
- Own and manage PostgreSQL databases in production environments
- Perform PostgreSQL installation, upgrades, migrations, and configuration
- Handle L2/L3 production incidents, root cause analysis, and performance bottlenecks
- Execute performance tuning and query optimization
- Manage backup, recovery, replication, HA, and failover strategies
- Support re-architecture and optimization initiatives led by senior stakeholders
- Monitor database health, capacity, and reliability proactively
- Collaborate with application, infra, and DevOps teams
- Mentor and guide L1/L2 DBAs as part of the L3 role
- Demonstrate ownership during night/weekend production issues (comp-offs provided)
Must-Have Skills (Non-Negotiable)
- Very strong PostgreSQL expertise
- Deep understanding of PostgreSQL internals and behavior
- Proven experience with:
- Performance tuning & optimization
- Production troubleshooting (L2/L3)
- Backup & recovery
- Replication & High Availability
- Ability to work independently in critical production scenarios
- PostgreSQL-focused profiles are absolutely acceptable (no requirement to know other DBs)
Good-to-Have (Not Mandatory)
- Exposure to AWS and/or Azure
- Experience with cloud-managed or self-hosted Postgres
- Knowledge of other databases (Oracle, MS SQL, DB2, ClickHouse, Neo4j, etc.) — purely a plus
Note: Strong on-prem PostgreSQL DBAs are welcome. Cloud gaps can be trained post-joining.
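As a taste of the replication monitoring this role involves: a PostgreSQL LSN like `0/3000060` encodes a 64-bit WAL position as two hex halves, and lag can be expressed as the byte difference between primary and replica positions. A small stdlib-only sketch (the LSN values are illustrative; in practice they come from `pg_current_wal_lsn()` and `pg_last_wal_replay_lsn()`):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a PostgreSQL LSN such as '0/3000060' to a byte offset.

    The LSN is the high 32 bits and low 32 bits of a 64-bit WAL position,
    written as two hex numbers separated by '/'.
    """
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)

def replication_lag_bytes(primary_lsn: str, replica_lsn: str) -> int:
    """Byte lag between primary and replica WAL positions."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_lsn)

print(replication_lag_bytes("0/3000060", "0/3000000"))  # → 96
```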
Work Model & Availability (Important – Please Read Carefully)
- Work From Office only (Bangalore – Koramangala)
- Regular day shift, but with a 24×7 production ownership mindset
- Availability for night/weekend troubleshooting when required
- No rigid shifts; expectation is responsible lead-level ownership
- Comp-offs provided for off-hours work
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
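The caching strategies listed above all rest on the same expiry logic, whether applied at the CDN edge, a reverse proxy, or the application layer. A tiny in-process sketch of TTL-based caching (production systems would use Redis, CloudFront TTLs, or `Cache-Control` headers instead):

```python
import time

class TTLCache:
    """A minimal in-process cache with per-entry time-to-live."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value, now=None):
        # `now` is injectable for testing; defaults to a monotonic clock.
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if now >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache(ttl_seconds=60)
cache.set("home", "<html>...</html>", now=0.0)
print(cache.get("home", now=30.0))  # → <html>...</html>
print(cache.get("home", now=61.0))  # → None
```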
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
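The reflection optimization mentioned above amounts to answering matching queries from a precomputed summary instead of rescanning raw data. A rough pure-Python analogue, with entirely hypothetical table and column names:

```python
# Raw fact rows, standing in for a large Parquet/Iceberg table in S3/ADLS.
rows = [
    {"region": "EU", "amount": 120},
    {"region": "EU", "amount": 80},
    {"region": "US", "amount": 200},
]

# Build a small "aggregation reflection": totals per region, computed once.
reflection = {}
for row in rows:
    reflection[row["region"]] = reflection.get(row["region"], 0) + row["amount"]

def total_sales(region: str) -> int:
    # Queries matching the reflection's shape are served from the summary,
    # not by rescanning the raw rows.
    return reflection.get(region, 0)

print(total_sales("EU"))  # → 200
```

Dremio additionally handles reflection refresh, matching, and cost-based selection automatically; this sketch only shows the core trade-off of precomputation versus rescan.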
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
We need an immediately available performance test engineer with real-time exposure to LoadRunner and JMeter who has tested Java applications in AWS environments.
Exp: 7- 10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
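The p95/p99 monitoring mentioned above means tracking tail latency rather than averages. A minimal nearest-rank percentile sketch (real monitoring stacks compute this over sliding windows or histograms, and the latency values below are fabricated):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile, as used for p95/p99 latency SLOs."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical request latencies in milliseconds; note how a few slow
# requests dominate the tail while barely moving the median.
latencies_ms = [12, 15, 11, 240, 14, 13, 16, 12, 11, 900]
print(percentile(latencies_ms, 50))  # → 13
print(percentile(latencies_ms, 95))  # → 900
```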
If interested, kindly share your updated resume at 82008 31681.
As a Senior Backend Developer, you will play a key role in building a product that will change the way users experience yoga and fitness. Working closely with our technical and product leadership, you will help secure the performance, experience, and scalability of our product, and with your experience you will play a key part in our product and growth roadmap. We're looking for an engineer who not only writes high-quality backend code but also embodies a forward-thinking, AI-augmented development mindset: someone who embraces AI and automation as a force multiplier, leveraging modern AI tools to accelerate delivery, increase code quality, and focus time on higher-order problems.
Responsibilities
- At least 3 years of experience in product development and backend technologies, with a strong understanding of the technology and familiarity with the latest trends in backend development.
- Design, develop, and maintain scalable backend services and APIs, ensuring high performance and reliability.
- Lead the architecture and implementation of new features, driving projects from concept to deployment.
- Optimize application performance and ensure high availability across systems.
- Implement robust security and data protection measures to safeguard critical information.
- Contribute to technical decision-making and architectural planning, ensuring long-term scalability and efficiency.
- Create and maintain clear, concise technical documentation for new systems, architectures, and codebases.
- Lead knowledge-sharing sessions to promote best practices across teams.
- Work closely with product managers, front-end developers, and other stakeholders to define requirements, design systems, and deliver impactful product features within reasonable timelines.
- Continuously identify opportunities for system improvements, automation, and optimizations.
- Lead efforts to implement new technologies and processes that enhance engineering productivity and product performance.
- Take ownership of critical incidents, performing root cause analysis and implementing long-term solutions to minimize downtime and ensure business continuity.
- Ability to communicate clearly and effectively at various levels - intra-team, inter-group, spoken skills, and written skills - including email, presentation and articulation skills.
- Strong knowledge of AI-assisted development tools, with hands-on experience reducing boilerplate coding, identifying bugs faster, and optimizing system design.
Qualifications
- 2+ years of strong experience developing services in Go language
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- 3+ years of experience in backend software engineering with a strong track record of delivering complex backend systems, preferably in cloud-native environments.
- Strong experience with designing and maintaining large-scale databases (SQL and NoSQL) and knowledge of performance optimization techniques.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and cloud-native architectures (containers, serverless, microservices) is highly desirable.
- Familiarity with modern software development practices, including CI/CD, test automation, and Agile methodologies.
- Proven ability to solve complex engineering problems with innovative solutions and practical thinking.
- Strong leadership and interpersonal skills, with the ability to work cross-functionally and influence technical direction across teams.
- Excellent communication skills, with the ability to communicate complex technical ideas to both technical and non-technical stakeholders.
- Demonstrated ability to boost engineering output through strategic use of AI tools and practices—contributing to a “10x developer” mindset focused on outcomes, not just effort.
- Comfortable working in a fast-paced, high-leverage environment where embracing automation and AI-first workflows is part of the engineering culture.
Required Skills
- Strong experience in Go language
- Strong experience in backend technologies and cloud-native environments.
- Proficiency in designing and maintaining large-scale databases.
- Strong problem-solving skills and familiarity with modern software development practices.
- Excellent communication skills.
Preferred Skills
- Experience with AI-assisted development tools.
- Knowledge of performance optimization techniques.
- Experience in Agile methodologies.
About the Company
MyYogaTeacher is a fast-growing health tech startup with a mission to improve the physical and mental well-being of the entire planet. We are the first online marketplace to connect qualified Fitness and Yoga coaches from India with consumers worldwide to provide personalized 1-on-1 sessions via live video conference (app, web). We started in 2019 and have been showing tremendous traction with rave customer reviews.
- Over 200,000 happy customers
- Over 335,000 5-star reviews
- Over 150 Highly qualified coaches on the platform
- 95% of sessions are completed with a 5-star rating
Headquartered in California, with operations based in Bangalore, we are dedicated to providing exceptional service and promoting the benefits of yoga and fitness coaching worldwide.
Role Description
This is a full-time on-site role in Bengaluru for a Full Stack Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end web development, software development, full-stack development, and using Cascading Style Sheets (CSS) to build effective and efficient applications.
Qualifications
- Back-End Web Development and Full-Stack Development skills
- Front-End Development and Software Development skills
- Proficiency in Cascading Style Sheets (CSS)
- Experience with Python, Django, and Flask frameworks
- Strong problem-solving and analytical skills
- Ability to work collaboratively in a team environment
- Bachelor's or Master's degree in Computer Science or relevant field
- Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
- Front end - React.js
- Data Engineering: Useful experience blending data engineering with core software engineering.
- Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
- CI/CD Tools: Familiarity with GitHub Actions is a plus.
- Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
- Code Optimization: Proficient in profiling and optimizing Python code.
Job Description - Technical Project Manager
Job Title: Technical Project Manager
Location: Bhopal / Bangalore (On-site)
Experience Required: 7+ Years
Industry: Fintech / SaaS / Software Development
Role Overview
We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.
Key Responsibilities
Project & Team Management
- Manage daily tasks for Android, Frontend, and Backend developers
- Conduct daily stand-ups, weekly planning, and reviews
- Track progress, identify blockers, and ensure timely delivery
- Maintain sprint boards, task estimations, and timelines
Technical Requirement Translation
- Convert business requirements into technical tasks
- Communicate requirements clearly to developers
- Create user stories, flow diagrams, and PRDs
- Ensure requirements are understood and implemented correctly
Quality & Build Review
- Validate build quality, UI/UX flow, functionality
- Check API integrations, errors, performance issues
- Ensure coding practices and architecture guidelines are followed
- Perform preliminary QA before handover to testing or clients
Issue Resolution
- Identify development issues early
- Coordinate with developers to fix bugs
- Escalate major issues to founders with clear insights
Reporting & Documentation
- Daily/weekly reports to management
- Sprint documentation, release notes
- Maintain project documentation & version control processes
Cross-Team Communication
- Act as the single point of contact for management
- Align multiple tech teams with business goals
- Coordinate with HR and operations for resource planning
Required Skills
- Strong understanding of Android, Web (Frontend/React), Backend development flows
- Knowledge of APIs, Git, CI/CD, basic testing
- Experience with Agile/Scrum methodologies
- Ability to review builds and suggest improvements
- Strong documentation skills (Jira, Notion, Trello, Asana)
- Excellent communication & leadership
- Ability to handle pressure and multiple projects
Good to Have
- Prior experience in Fintech projects
- Basic knowledge of UI/UX
- Experience in preparing FSD/BRD/PRD
- QA experience or understanding of test cases
Salary Range: 9 to 12 LPA
Job Description: Business Analyst – Data Integrations
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
About the Role
We’re looking for a skilled Business Analyst – Data Integrations who can bridge the gap between business operations and technology teams, ensuring smooth, efficient, and scalable integrations. If you’re passionate about hospitality tech and enjoy solving complex data challenges, we’d love to hear from you!
What You’ll Do
Key Responsibilities
- Collaborate with vendors to gather requirements for API development and ensure technical feasibility.
- Collect API documentation from vendors; document and explain business logic to use external data sources effectively.
- Access vendor applications to create and validate sample data; ensure the accuracy and relevance of test datasets.
- Translate complex business logic into documentation for developers, ensuring clarity for successful integration.
- Monitor all integration activities and support tickets in Jira, proactively resolving critical issues.
- Lead QA testing for integrations, overseeing pilot onboarding and ensuring solution viability before broader rollout.
- Document onboarding processes and best practices to streamline future integrations and improve efficiency.
- Build, train, and deploy machine learning models for forecasting, pricing, and optimization, supporting strategic goals.
- Drive end-to-end execution of data integration projects, including scoping, planning, delivery, and stakeholder communication.
- Gather and translate business requirements into actionable technical specifications, liaising with business and technical teams.
- Oversee maintenance and enhancement of existing integrations, performing RCA and resolving integration-related issues.
- Document workflows, processes, and best practices for current and future integration projects.
- Continuously monitor system performance and scalability, recommending improvements to increase efficiency.
- Coordinate closely with Operations for onboarding and support, ensuring seamless handover and issue resolution.
Desired Skills & Qualifications
- Strong experience in API integration, data analysis, and documentation.
- Familiarity with Jira for ticket management and project workflow.
- Hands-on experience with machine learning model development and deployment.
- Excellent communication skills for requirement gathering and stakeholder engagement.
- Experience with QA test processes and pilot rollouts.
- Proficiency in project management, data workflow documentation, and system monitoring.
- Ability to manage multiple integrations simultaneously and work cross-functionally.
Required Qualifications
- Experience: Minimum 4 years in hotel technology or business analytics, preferably handling data integration or system interoperability projects.
- Technical Skills:
- Basic proficiency in SQL or database querying.
- Familiarity with data integration concepts such as APIs or ETL workflows (preferred but not mandatory).
- Eagerness to learn and adapt to new tools, platforms, and technologies.
- Hotel Technology Expertise: Understanding of systems such as PMS, CRS, Channel Managers, or RMS.
- Project Management: Strong organizational and multitasking abilities.
- Problem Solving: Analytical thinker capable of troubleshooting and driving resolution.
- Communication: Excellent written and verbal skills to bridge technical and non-technical discussions.
- Attention to Detail: Methodical approach to documentation, testing, and deployment.
Preferred Qualifications
- Exposure to debugging tools and troubleshooting methodologies.
- Familiarity with cloud environments (AWS).
- Understanding of data security and privacy considerations in the hospitality industry.
Why LodgIQ?
- Join a fast-growing, mission-driven company transforming the future of hospitality.
- Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
- Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
- Competitive salary and performance bonuses.
For more information, visit https://www.lodgiq.com

Global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
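The SQL analytics and windowing-function knowledge listed above can be illustrated with a small, self-contained sketch. This uses Python's built-in sqlite3 (which supports window functions); the table and column names are hypothetical, chosen only for illustration:

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, order_day INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("alice", 1, 100.0), ("alice", 2, 50.0), ("bob", 1, 70.0), ("alice", 3, 25.0)],
)

# Running total per customer: a typical analytics windowing query,
# partitioned by customer and ordered by day.
rows = conn.execute("""
    SELECT customer, order_day, amount,
           SUM(amount) OVER (
               PARTITION BY customer ORDER BY order_day
           ) AS running_total
    FROM orders
    ORDER BY customer, order_day
""").fetchall()

for row in rows:
    print(row)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over to warehouse engines such as BigQuery or Snowflake with minor dialect differences.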
Additional Comments:
- # of Resources: 22
- Role(s): Technical Role
- Location(s): India
- Planned Start Date: 1/1/2026
- Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
Design, build, and maintain scalable data pipelines using Databricks and PySpark.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
Ensure data quality, performance, and reliability across data workflows.
Participate in code reviews, data architecture discussions, and performance optimization initiatives.
Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
Key Skills:
Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
Excellent problem-solving, communication, and collaboration skills.
Skills: Databricks, PySpark & Python, SQL, AWS Services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
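As a rough illustration of the ETL/ELT pattern these requirements describe, here is a minimal pure-Python sketch of the extract, transform, and load stages. It uses only the standard library; in practice the same stages would run as a Databricks/PySpark job against S3, Glue, or Redshift, and every name below is a stand-in invented for the example:

```python
import csv
import io

# Extract: read raw records (an in-memory CSV standing in for an S3/Glue source).
raw = io.StringIO("user_id,amount\n1,10.5\n2,abc\n1,4.5\n")
records = list(csv.DictReader(raw))

# Transform: drop malformed rows and cast types (a basic data-quality step).
def clean(rec):
    try:
        return {"user_id": int(rec["user_id"]), "amount": float(rec["amount"])}
    except ValueError:
        return None  # reject rows that fail type casting

cleaned = [r for r in (clean(rec) for rec in records) if r is not None]

# Load: aggregate per user into the "warehouse" (a dict standing in for a target table).
warehouse = {}
for r in cleaned:
    warehouse[r["user_id"]] = warehouse.get(r["user_id"], 0.0) + r["amount"]

print(warehouse)  # the bad row ("abc") was rejected during transform
```

In PySpark the same flow would be expressed as `spark.read.csv(...)`, a filter/cast transformation, and a write to Delta or Redshift, but the stage boundaries are identical.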
******
Notice period - Immediate to 15 days
Location: Bangalore
Senior Python Django Developer
Experience: Back-end development: 6 years (Required)
Location: Bangalore/ Bhopal
Job Description:
We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.
This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.
Responsibilities:
- Design and develop scalable, secure, and high-performance applications using Python (Django framework).
- Architect system components, define database schemas, and optimize backend services for speed and efficiency.
- Lead and implement design patterns and software architecture best practices.
- Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
- Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
- Drive performance improvements, monitor system health, and troubleshoot production issues.
- Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
- Contribute to technical decision-making and mentor junior developers.
Requirements:
- 6 to 10 years of professional backend development experience with Python and Django.
- Strong background in payments/financial systems or FinTech applications.
- Proven experience in designing software architecture in a microservices or modular monolith environment.
- Experience working in fast-paced startup environments with agile practices.
- Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
- Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
- Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
- Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
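As a small illustration of the test-driven development practice the requirements mention, here is a hedged sketch: a hypothetical payment-amount validation helper with unittest-style tests. The posting names pytest, unittest, and factory_boy; plain unittest is used here, and the function, its limits, and its rules are invented purely for illustration:

```python
import unittest
from decimal import Decimal

def validate_payment_amount(amount_str, max_amount=Decimal("100000")):
    """Parse and validate a payment amount.

    Decimal (not float) is used because binary floating point
    introduces rounding errors in money arithmetic.
    """
    amount = Decimal(amount_str)
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > max_amount:
        raise ValueError("amount exceeds per-transaction limit")
    return amount

class TestValidatePaymentAmount(unittest.TestCase):
    def test_valid_amount(self):
        self.assertEqual(validate_payment_amount("250.75"), Decimal("250.75"))

    def test_rejects_zero_and_negative(self):
        for bad in ("0", "-5"):
            with self.assertRaises(ValueError):
                validate_payment_amount(bad)

    def test_rejects_over_limit(self):
        with self.assertRaises(ValueError):
            validate_payment_amount("100000.01")

# Run with: python -m unittest <module_name>
```

In TDD the three test cases would be written first and the helper grown until they pass; the same structure maps directly onto pytest functions.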
Preferred Skills:
- Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
- Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
- Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
- Contributions to open-source or personal finance-related projects.
Job Types: Full-time, Permanent
Schedule:
- Day shift
Supplemental Pay:
- Performance bonus
- Yearly bonus
Ability to commute/relocate:
- JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a let’s figure this out attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, Sql
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: 8-hour window between 7:30 PM IST and 4:30 AM IST
Required Qualifications
● Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.
● Technical Skills:
○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.
○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.
○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.
○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).
○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.
● Soft Skills:
○ Experience in hiring team members.
○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.
○ Excellent communication and leadership skills, with the ability to mentor junior engineers.
○ A strong desire to use technology for social good.
Preferred Qualifications
● Experience working in a startup or smaller team environment.
● Familiarity with the healthcare or public health sector.
● Experience in developing applications for low-resource environments.
● Experience with data management in privacy and security-sensitive applications.
We're seeking an experienced Engineer to join our engineering team, handling massive-scale data processing and analytics infrastructure that supports over 1B daily events, 3M+ DAU, and 50k+ hours of content. The ideal candidate will bridge the gap between raw data collection and actionable insights, while supporting our ML initiatives.
Key Responsibilities
- Lead and scale the Infrastructure Pod, setting technical direction for data, platform, and DevOps initiatives.
- Architect and evolve our cloud infrastructure to support 1B+ daily events — ensuring reliability, scalability, and cost efficiency.
- Collaborate with Data Engineering and ML pods to build high-performance pipelines and real-time analytics systems.
- Define and implement SLOs, observability standards, and best practices for uptime, latency, and data reliability.
- Mentor and grow engineers, fostering a culture of technical excellence, ownership, and continuous learning.
- Partner with leadership on long-term architecture and scaling strategy — from infrastructure cost optimization to multi-region availability.
- Lead initiatives on infrastructure automation, deployment pipelines, and platform abstractions to improve developer velocity.
- Own security, compliance, and governance across infrastructure and data systems.
Who You Are
- Previously a Tech Co-founder / Founding Engineer / First Infra Hire who scaled a product from early MVP to significant user or data scale.
- 5–12 years of total experience, with at least 2+ years in leadership or team-building roles.
- Deep experience with cloud infrastructure (AWS/GCP), containers (Docker, Kubernetes), and IaC tools (Terraform, Pulumi, or CDK).
- Hands-on expertise in data-intensive systems, streaming (Kafka, RabbitMQ, Spark Streaming), and distributed architecture design.
- Proven experience building scalable CI/CD pipelines, observability stacks (Prometheus, Grafana, ELK), and infrastructure for data and ML workloads.
- Comfortable being hands-on when needed — reviewing design docs, debugging issues, or optimizing infrastructure.
- Strong system design and problem-solving skills; understands trade-offs between speed, cost, and scalability.
- Passionate about building teams, not just systems — can recruit, mentor, and inspire engineers.
Preferred Skills
- Experience managing infra-heavy or data-focused teams.
- Familiarity with real-time streaming architectures.
- Exposure to ML infrastructure, data governance, or feature stores.
- Prior experience in the OTT / streaming / consumer platform domain is a plus.
- Contributions to open-source infra/data tools or strong engineering community presence.
What We Offer
- Opportunity to build and scale infrastructure from the ground up, with full ownership and autonomy.
- High-impact leadership role shaping our data and platform backbone.
- Competitive compensation + ESOPs.
- Continuous learning budget and certification support.
- A team that values velocity, clarity, and craftsmanship.
Success Metrics
- Reduction in infra cost per active user and event processed.
- Increase in developer velocity (faster pipeline deployments, reduced MTTR).
- High system availability and data reliability SLAs met.
- Successful rollout of infra automation and observability frameworks.
- Team growth, retention, and technical quality.
- 5+ years full-stack development
- Proficiency in AWS cloud-native development
- Experience with microservices & async architectures
- Strong TypeScript proficiency
- Strong Python proficiency
- React.js expertise
- Next.js expertise
- PostgreSQL + PostGIS experience
- GraphQL development experience
- Prisma ORM experience
- Experience in B2C product development (Retail/E-commerce)
- Looking for candidates based out of Bangalore only
CTC: up to 40 LPA
If interested kindly share your updated resume at 82008 31681
We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.
What You’ll Do
- Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
- Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
- Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
- Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
- Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
- Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.
What Makes You a Strong Fit
- You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
- You think in systems and second-order effects—not just in ticket-by-ticket outputs.
- You prefer well-reasoned defaults over overengineering.
- You take ownership—not just of code, but of the outcomes it enables.
- You work cleanly, write clear code, and make life easier for those who come after you.
- You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.
Bonus if You Have Experience With
- Building tools or workflows that accelerate other developers.
- Working with AI coding tools and integrating them meaningfully into your workflow.
- Building for SaaS products, especially those with large user bases or self-serve motions.
- Working in small, fast-moving product teams with a high bar for ownership.
Why Join Us
- A small team that values craftsmanship, curiosity, and momentum.
- A product-driven culture where engineering decisions are informed by customer outcomes.
- A chance to work on multiple zero-to-one opportunities with strong PMF.
- No vanity perks—just meaningful work with people who care.
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django)
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI, Django), with working familiarity across both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
Required Skills & Experience
- Must have 8+ years of relevant experience in Java design and development.
- Extensive experience working on solution design and API design.
- Experience in Java development at an enterprise level (Spring Boot, Java 17+, Spring Security, Microservices, Spring).
- Extensive work experience in monolithic applications using Spring.
- Extensive experience leading API development and integration (REST/JSON).
- Extensive work experience using Apache Camel.
- In-depth technical knowledge of database systems (Oracle, SQL Server).
- Ability to refactor and optimize existing code for performance, readability, and maintainability.
- Experience working with Continuous Delivery/Continuous Integration (CI/CD) pipelines.
- Experience in container platforms (Docker, OpenShift, Kubernetes).
- DevOps knowledge including:
- Configuring continuous integration, deployment, and delivery tools like Jenkins or Codefresh
- Container-based development using Docker, Kubernetes, and OpenShift
- Instrumenting monitoring and logging of applications
Requirements
- 6–12 years of backend development experience.
- Strong expertise in Java 11+, Spring Boot, REST APIs, AWS.
- Solid experience with distributed, high-volume systems.
- Strong knowledge of RDBMS (e.g., MySQL, Oracle) and NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra).
- Hands-on experience with CI/CD (Jenkins) and caching technologies such as Redis.
- Strong debugging and system troubleshooting skills.
- Experience with payment systems is a must.
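In practice, payment-system work means keeping money arithmetic exact; a minimal sketch of the usual approach in stdlib Python (the fee helper and the paise minor-unit convention are illustrative, not from any particular platform):

```python
from decimal import Decimal, ROUND_HALF_UP

def add_fee(amount_paise: int, fee_bps: int) -> int:
    """Apply a fee in basis points to an amount held as integer paise,
    rounding half-up as many payment ledgers do."""
    fee = (Decimal(amount_paise) * fee_bps / Decimal(10_000)).quantize(
        Decimal("1"), rounding=ROUND_HALF_UP
    )
    return amount_paise + int(fee)

# Binary floats accumulate representation error; integers in the minor
# unit (paise) plus Decimal for intermediate math do not.
print(0.1 + 0.2)             # 0.30000000000000004
print(add_fee(10_000, 250))  # 2.5% fee on Rs. 100.00 -> 10250 paise
```

Storing amounts in the minor unit and converting only at the display layer is a common convention, not a universal rule.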
Seeking an experienced AWS Migration Engineer with 7+ years of hands-on experience to lead cloud migration projects, assess legacy systems, and ensure seamless transitions to AWS infrastructure. The role focuses on strategy, execution, optimization, and minimizing downtime during migrations.
Key Responsibilities:
- Conduct assessments of on-premises and legacy systems for AWS migration feasibility.
- Design and execute migration strategies using AWS Migration Hub, DMS, and Server Migration Service.
- Plan and implement lift-and-shift, re-platforming, and refactoring approaches.
- Optimize workloads post-migration for cost, performance, and security.
- Collaborate with stakeholders to define migration roadmaps and timelines.
- Perform data migration, application re-architecture, and hybrid cloud setups.
- Monitor migration progress, troubleshoot issues, and ensure business continuity.
- Document processes and provide post-migration support and training.
- Manage and troubleshoot Kubernetes/EKS networking components including VPC CNI, Service Mesh, Ingress controllers, and Network Policies.
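Roadmap and wave planning of this kind often reduces to ordering applications so that dependencies migrate before their dependents; a hypothetical sketch using the stdlib graphlib toposort (the application names and dependency map are invented for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each app lists what it depends on, which
# should land in AWS in an earlier (or the same) migration wave.
deps = {
    "web-frontend": {"orders-api", "auth"},
    "orders-api": {"orders-db"},
    "auth": set(),
    "orders-db": set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())  # everything migratable in parallel
    waves.append(sorted(ready))
    ts.done(*ready)

print(waves)  # [['auth', 'orders-db'], ['orders-api'], ['web-frontend']]
```

Real wave planning also weighs cutover windows, data volume, and shared infrastructure, which a plain topological order does not capture.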
Required Qualifications:
- 7+ years of IT experience, with a minimum of 4 years focused on AWS migrations.
- AWS Certified Solutions Architect or Migration Specialty certification preferred.
- Expertise in AWS services: EC2, S3, RDS, VPC, Direct Connect, DMS, SMS.
- Strong knowledge of cloud migration tools and frameworks (AWS MGN, Snowball).
- Experience with infrastructure as code (CloudFormation, Terraform).
- Proficiency in scripting (Python, PowerShell) and automation.
- Familiarity with security best practices (IAM, encryption, compliance).
- Hands-on experience with Kubernetes/EKS networking components and best practices.
Preferred Skills:
- Experience with hybrid/multi-cloud environments.
- Knowledge of DevOps tools (Jenkins, GitLab CI/CD).
- Excellent problem-solving and communication skills.
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with a minimum of 3 years of hands-on Dremio experience
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
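Reflections, mentioned above, are essentially engine-managed materializations that Dremio transparently rewrites queries against; the underlying idea can be sketched as a hand-rolled pre-aggregation in stdlib sqlite3 (table and column names are invented, and real reflections are maintained by Dremio, not by hand):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('south', 10.0), ('south', 5.0), ('north', 7.5);

    -- Stand-in for an aggregation reflection: the fact table is rolled up
    -- once, so repeated dashboard queries hit the small table instead.
    CREATE TABLE sales_agg AS
        SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
""")

total_south = con.execute(
    "SELECT total FROM sales_agg WHERE region = 'south'"
).fetchone()[0]
print(total_south)  # 15.0
```

The difference in Dremio is that the rewrite to the materialization is automatic and cost-based, which is exactly what the reflection-tuning work above is about.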
Ideal Candidate
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
3–5 years of experience as a full-stack developer, with essential requirements in the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.
Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools, and Scalable and Secure Cloud Hosting is a significant plus.
Ability to manage a hosting environment, scale applications to handle load changes, and apply knowledge of accessibility and security compliance.
Testing of API endpoints.
Ability to code and create functional web applications and optimize them to improve response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, and table partitioning.
Expert knowledge of Python and corresponding frameworks with their best practices; expert knowledge of relational and NoSQL databases.
Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.
Must be conversant with Agile software development methodology, able to write technical documents, and coordinate with test teams. Proficiency using Git version control.
Please note that salary will be based on experience.
Job Title: Full Stack Engineer
Location: Bengaluru (Indiranagar) – Work From Office (5 Days)
Job Summary
We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.
Responsibilities
- Design, develop, and maintain scalable full-stack applications.
- Build responsive, high-performance UIs using TypeScript & Next.js.
- Develop backend services and APIs using Python (FastAPI/Django).
- Work closely with product, design, and business teams to translate requirements into intuitive solutions.
- Contribute to architecture discussions and drive technical best practices.
- Own features end-to-end — design, development, testing, deployment, and monitoring.
- Ensure robust security, code quality, and performance optimization.
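A backend service of the kind described usually starts from a health endpoint; since FastAPI isn't assumed to be installed here, the same idea sketched with the stdlib http.server (the route and payload are illustrative):

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from threading import Thread
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # FastAPI equivalent: @app.get("/health") returning {"status": "ok"}
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    payload = json.load(resp)
print(payload)  # {'status': 'ok'}
server.shutdown()
```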
Tech Stack
Frontend: TypeScript, Next.js, React, Tailwind CSS
Backend: Python, FastAPI, Django
Databases: PostgreSQL, MongoDB, Redis
Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD
Other Tools: Git, GitHub, Elasticsearch, Observability tools
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience.
- Strong expertise in either frontend (TypeScript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
- Experience building RESTful services and microservices.
- Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Strong debugging, problem-solving, and optimization skills.
- Ability to thrive in fast-paced, high-ownership startup environments.
Good-to-Have:
- Exposure to Docker, Kubernetes, and observability tools.
- Experience with message queues or event-driven architecture.
Perks & Benefits
- Upskilling support – courses, tools & learning resources.
- Fun team outings, hackathons, demos & engagement initiatives.
- Flexible Work-from-Home: 12 WFH days every 6 months.
- Menstrual WFH: up to 3 days per month.
- Mobility benefits: relocation support & travel allowance.
- Parental support: maternity, paternity & adoption leave.
Job Title : Full Stack Engineer (Python + React.js/Next.js)
Experience : 1 to 6+ Years
Location : Bengaluru (Indiranagar)
Employment : Full-Time
Working Days : 5 Days WFO
Notice Period : Immediate to 30 Days
Role Overview :
We are seeking Full Stack Engineers to build scalable, high-performance fintech products.
You will work on both the frontend (TypeScript/Next.js) and backend (Python/FastAPI/Django), owning features end-to-end and contributing to architecture, performance, and product innovation.
Main Tech Stack :
Frontend : TypeScript, Next.js, React
Backend : Python, FastAPI, Django
Database : PostgreSQL, MongoDB, Redis
Cloud : AWS/GCP, Docker, Kubernetes
Tools : Git, GitHub, CI/CD, Elasticsearch
Key Responsibilities :
- Develop full-stack applications with clean, scalable code.
- Build fast, responsive UIs using TypeScript, Next.js, React.
- Develop backend APIs using Python, FastAPI, Django.
- Collaborate with product/design to implement solutions.
- Own development lifecycle: design → build → deploy → monitor.
- Ensure performance, reliability, and security.
Requirements :
Must-Have :
- 1–6+ years of full-stack experience.
- Product-based company background.
- Strong DSA + problem-solving skills.
- Proficiency in either frontend or backend with familiarity in both.
- Hands-on experience with APIs, microservices, Git, CI/CD, cloud.
- Strong communication & ownership mindset.
Good-to-Have :
- Experience with containers, system design, observability tools.
Interview Process :
- Coding Round : DSA + problem solving
- System Design : LLD + HLD, scalability, microservices
- CTO Round : Technical deep dive + cultural fit
Planview is seeking a passionate Sr Software Engineer I to lead the development of internal AI tools and connectors, enabling seamless integration with internal and third-party data sources. This role will drive internal AI enablement and productivity across engineering and customer teams by consulting with business stakeholders, setting technical direction, and delivering scalable solutions.
Responsibilities:
- Work with business stakeholders to enable successful AI adoption.
- Develop connectors leveraging MCP or third-party APIs to enable new integrations.
- Prioritize and execute integrations with internal and external data platforms.
- Collaborate with other engineers to expand AI capabilities.
- Establish and monitor uptime metrics, set up alerts, and follow a proactive maintenance schedule.
- Operate and troubleshoot Docker-based and serverless deployments.
- Work with DevOps engineers to manage and deploy new tools as required.
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field.
- 4+ years of experience in infrastructure engineering, data integration, or AI operations.
- Strong Python coding skills.
- Experience configuring and scaling infrastructure for large user bases.
- Proficiency with monitoring tools, alerting systems, and maintenance best practices.
- Hands-on experience with containerized and serverless deployments.
- Ability to code connectors using MCP or third-party APIs.
- Strong troubleshooting and support skills.
Preferred Qualifications:
- Experience building RAG knowledge bases, MCP servers, and API integration patterns.
- Experience leveraging AI (LLMs) to boost productivity and streamline workflows.
- Exposure to working with business stakeholders to drive AI adoption and feature expansion.
- Familiarity with MCP server support and resilient feature design.
- Skilled at working as part of a global, diverse workforce.
- AWS Certification is a plus.
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools.
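Much of the troubleshooting above begins with log analysis; a small, hypothetical sketch of triaging an application log with stdlib tools (the log format and messages are invented):

```python
import re
from collections import Counter

# Invented sample of an application log; real formats vary by platform.
log = """\
2024-05-02 09:15:01 INFO  order 1001 accepted
2024-05-02 09:15:02 ERROR RMS rejection: margin shortfall for order 1002
2024-05-02 09:15:03 ERROR connection to OMS lost
2024-05-02 09:15:04 INFO  reconnected to OMS
2024-05-02 09:15:05 ERROR RMS rejection: margin shortfall for order 1003
"""

pattern = re.compile(r"^\S+ \S+ (?P<level>\w+)\s+(?P<msg>.*)$")
levels = Counter()
errors = []
for line in log.splitlines():
    m = pattern.match(line)
    if m:
        levels[m["level"]] += 1
        if m["level"] == "ERROR":
            errors.append(m["msg"])

print(levels["ERROR"])  # 3
print(errors[0])        # RMS rejection: margin shortfall for order 1002
```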
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
About Company:
HealthAsyst® is an IT service and product company. It is a leading provider of IT services to some of the largest healthcare IT vendors in the United States. We bring the value of cutting-edge technology through our deep expertise in product engineering, custom software development, testing, large-scale healthcare IT implementation and integrations, ongoing maintenance and support, BI & Analytics, and remote monitoring platforms.
As a true partner, we help our customers navigate a complex regulatory landscape, deal with cost pressures, and offer high-quality services. As a healthcare transformation agent, we enable innovation in technology and accelerate problem-solving while delivering unmatched cost benefits to healthcare technology companies.
Founded : 1999
Location : Anjaneya Techno Park, HAL Old Airport Road, Bangalore.
Products : CheckinAsyst, RadAsyst
IT Services : Product Engineering, Custom Development, QA & Testing, Integration, Maintenance, Managed Services.
Position Overview:
We are seeking a highly skilled and motivated Associate Architect for Web Applications to join our dynamic product development team. The ideal candidate will have a strong background in web application architecture, design, and development, along with a passion for staying up to date with the latest industry trends and technologies. As an Associate Architect, you will collaborate with cross-functional teams, mentor junior developers, and play a critical role in shaping the technical direction of our web applications.
Qualifications:
- Bachelor’s or master’s degree in computer science, Software Engineering, or a related field.
- Overall 9 to 15 years of experience.
- Proven experience (8 years) in designing and developing web applications using modern web technologies and frameworks (e.g., JavaScript, React, jQuery Mobile, MVC, and ASP.NET).
- Strong understanding of software architecture principles, design patterns, and best practices.
- Demonstrated experience in mentoring and leading development teams.
- Proficiency in database design.
- Excellent problem-solving skills and the ability to tackle complex technical challenges.
- Familiarity with cloud platforms (Azure) and containerization technologies (Docker) is a plus.
- Effective communication skills and the ability to collaborate with both technical and non-technical stakeholders.
- Up-to-date knowledge of industry trends, emerging technologies, and best practices in web application development.
Key Responsibilities:
Web Application Architecture:
- Collaborate with stakeholders, including business analysts and the Solutions team, to understand product requirements and translate them into scalable and efficient web applications using appropriate architectural designs.
- Design architectural patterns, system components, and data models to ensure a robust and maintainable application structure.
- Evaluate and recommend appropriate technology stacks, frameworks, and tools to achieve project goals.
Technical Leadership and Mentorship:
- Provide technical guidance and mentorship to junior developers, fostering their growth and professional development.
- Lead code reviews, architectural discussions, and brainstorming sessions, ensuring optimized, scalable code and high-quality, well-architected solutions.
- Share best practices and coding standards with the development team to ensure consistent and efficient coding practices.
Development and Coding:
- Participate in hands-on development of components and features, ensuring code quality, performance, and security.
- Collaborate with front-end and back-end developers, drawing on full-stack development experience to ensure integrations work well within the technical stack and with partner systems.
- Troubleshoot complex technical issues, provide workarounds, and contribute to debugging efforts.
Performance and Scalability:
- Optimize application performance by analysing and addressing bottlenecks, ensuring responsive and efficient user experiences.
- Design and implement strategies for horizontal and vertical scalability to support increasing user loads and data volumes.
Collaboration and Communication:
- Work closely with cross-functional teams, including architects, QA engineers, business analysts, and automation resources, to deliver features on time.
- Effectively communicate technical concepts to non-technical stakeholders, contributing to project planning, progress tracking, and decision-making.
Job Description: Python Engineer
Role Summary
We are looking for a talented Python Engineer to design, develop, and maintain high-quality backend applications and automation solutions. The ideal candidate should have strong programming skills, familiarity with modern development practices, and the ability to work in a fast-paced, collaborative environment.
Key Responsibilities:
Python Development & Automation
- Design, develop, and maintain Python scripts, tools, and automation frameworks.
- Build automation for operational tasks such as deployment, monitoring, system checks, and maintenance.
- Write clean, modular, and well-documented Python code following best practices.
- Develop APIs, CLI tools, or microservices when required.
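An operational "system check" of the kind described above can be as small as a disk-usage guard run from cron; a hedged stdlib sketch (the threshold and path list are illustrative):

```python
import shutil

def disk_usage_pct(path: str) -> float:
    """Used space on the filesystem holding `path`, as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disks(paths, threshold_pct=90.0):
    """Return the paths whose filesystems exceed the usage threshold."""
    return [p for p in paths if disk_usage_pct(p) > threshold_pct]

# In a cron-driven check this result would feed an alerting hook
# (mail, Slack webhook, pager) rather than a print.
alerts = check_disks(["/"], threshold_pct=99.9)
print(alerts)
```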
Linux Systems Engineering
- Manage, configure, and troubleshoot Linux environments (RHEL, CentOS, Ubuntu).
- Perform system performance tuning, log analysis, and root-cause diagnostics.
- Work with system services, processes, networking, file systems, and security controls.
- Implement shell scripting (bash) alongside Python for system-level automation.
CI/CD & Infrastructure Support
- Support integration of Python automation into CI/CD pipelines (Jenkins).
- Participate in build and release processes for infrastructure components.
- Ensure automation aligns with established infrastructure standards and governance.
- Use Bash scripting together with Python to improve automation efficiency.
Cloud & DevOps Collaboration (if applicable)
- Collaborate with Cloud/DevOps engineers on automation for AWS or other cloud platforms.
- Integrate Python tools with configuration management tools such as Chef or Ansible, or with Terraform modules.
- Contribute to containerization efforts (Docker, Kubernetes) leveraging Python automation.
Senior Software Engineer
Location: Hyderabad, India
Who We Are:
Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.
What We Do:
At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.
What You’ll Do:
Build, Innovate, and Own:
- Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
- Architect and optimize data pipelines and storage solutions that power our AI-driven products.
- Collaborate closely with AI and data teams to bring machine learning models into production systems.
- Build integrations with external services and APIs to enable scalable, interoperable solutions.
- Ensure robust security, scalability, and observability across distributed systems.
- Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.
Responsibilities will include but are not limited to:
- Provide technical guidance and code reviews that raise the bar for quality and performance.
- Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.
What You’ll Need:
- Bachelor’s degree in Computer Science or equivalent practical experience.
- 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
- Strong experience with SQL and NoSQL data stores.
- Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
- Proven ability to design for performance, reliability, and security in data-intensive systems.
- Excellent communication skills and ability to work effectively in a global, cross-functional environment.
Set Yourself Apart With:
- Startup experience - specifically in building product from 0-1
- Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
- Experience in healthcare or fintech domains.
- Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).
Equal Employer/Veterans/Disabled
Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.
Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable Federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/Recovery operations from backups
- Automate routine tasks
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, Loki
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
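The backend-plus-Lambda responsibilities above can be sketched as a minimal handler. This is only an illustrative sketch: the event field (`name`) and the API-Gateway-shaped response contract are assumptions, not anything specified in this posting.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: validate input and return an
    API-Gateway-shaped response. Event fields are illustrative assumptions."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name")
    if not name:
        return {"statusCode": 400, "body": json.dumps({"error": "name is required"})}
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}!"})}

# Local invocation (no AWS needed): pass a fake API Gateway event and a None context.
if __name__ == "__main__":
    resp = handler({"body": json.dumps({"name": "Deqode"})}, None)
    print(resp["statusCode"])  # 200
```

Deploying this behind API Gateway, wiring IAM roles, and packaging are the parts Terraform or GitHub Actions would automate in the pipeline work described above.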
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
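The query-optimization skill listed above often comes down to reading an execution plan before and after adding an index. A toy sketch, using stdlib `sqlite3` as a stand-in for MySQL (an assumption); the `orders` table and column names are invented for illustration.

```python
import sqlite3

# In-memory database with a table large enough to show a plan difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite will execute the statement.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # index search
print(before)
print(after)
```

MySQL's `EXPLAIN` plays the same role; the habit of checking the plan transfers directly.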
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can contact us directly: 9316120132

is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
Required Skills
• 12+ years of proven experience in designing large-scale enterprise systems and distributed architectures.
• Strong expertise in Azure, AWS, Python, Docker, LangChain, Solution Architecture, C#, .Net
• Frontend technologies like React, Angular, and ASP.NET MVC.
• Deep knowledge of architecture frameworks (TOGAF).
• Understanding of security principles, identity management, and data protection.
• Experience with solution architecture methodologies and documentation standards
• Deep understanding of databases (SQL and NoSQL), RESTful APIs, and message brokers.
• Excellent communication, leadership, and stakeholder management skills.
Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)
Role Overview
Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.
Key Responsibilities
1. Client Engagement (India + International Markets)
- Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
- Explain Tradelab’s capabilities, architecture, and deployment options.
- Understand region-specific latency expectations, connectivity options, and regulatory constraints.
2. Requirement Gathering & Solutioning
- Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
- Assess infra readiness (cloud/on-prem/colo).
- Propose architecture aligned with forex markets.
3. Global Architecture & Deployment Design
- Design multi-region infrastructure using AWS/Azure/GCP.
- Architect low-latency routing between India–Singapore–Dubai.
- Support deployments in DCs like Equinix SG1/DX1.
4. Networking & Security Architecture
- Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
- Implement network hardening, segmentation, WAF/firewall rules.
5. DevOps, Cloud Engineering & Scalability
- Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
- Design global failover models.
6. BFSI & Trading Domain Expertise
- Indian broking, international forex, LP aggregation, HFT.
- OMS/RMS, risk engines, LP connectivity, and matching engines.
7. Latency, Performance & Capacity Planning
- Benchmark and optimize cross-region latency.
- Tune performance for high tick volumes and volatility bursts.
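Benchmarking cross-region latency, as described above, usually reduces to tracking tail percentiles over round-trip samples. A stdlib sketch; the sample values and the Mumbai-Singapore framing are invented for illustration.

```python
import statistics

def latency_report(samples_ms):
    """Summarize round-trip latency samples (milliseconds) into the tail
    percentiles typically tracked for trading links (p50/p99/max)."""
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p99": qs[98], "max": max(samples_ms)}

# Invented samples standing in for, e.g., India-Singapore round trips,
# including one volatility-burst outlier.
samples = [38.0] * 90 + [41.0] * 9 + [95.0]
report = latency_report(samples)
print(report["p99"] <= report["max"])  # True
```

For capacity planning, the p99 (not the average) is what survives volatility bursts; a mean over these samples would hide the 95 ms outlier entirely.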
8. Documentation & Consulting
- Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.
Required Skills
- AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
- DevOps: Kubernetes, Docker, Helm, Terraform.
- Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
- Message buses: Kafka, RabbitMQ, Redis Streams.
Domain Skills
- Deep Broking Domain Understanding.
- Indian broking + global forex/CFD.
- FIX protocol, LP integration, market data feeds.
- Regulations: SEBI, DFSA, MAS, ESMA.
Soft Skills
- Excellent communication and client-facing ability.
- Strong presales and solutioning mindset.
Preferred Qualifications
- B.Tech/BE/M.Tech in CS or equivalent.
- AWS Architect Professional, CCNP, CKA.
Why Join Us?
- Experience in colocation/global trading infra.
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
We are looking for an "IoT Migration Architect (Azure to AWS)" – a contract-to-hire role.
"IoT Migration Architect (Azure to AWS)" – Role 1
Salary: 28-33 LPA (fixed)
We have other positions in IoT as well.
- IoT Solutions Engineer – Role 2
- IoT Architect (8+ years) – Role 3
Design end-to-end IoT architecture, define strategy, and integrate hardware/software/cloud components.
Skills: cloud platforms, AWS IoT, Azure IoT, networking protocols.
Experience in large-scale IoT deployment.
Contract to Hire role.
Location – Pune/Hyderabad/Chennai/Bangalore
Work mode – hybrid, 2-3 days from the office per week.
Duration – long term, with potential for full-time conversion based on performance and business needs.
Notice period we can consider: 15-25 days (not more than that).
Client company – one of the leading technology consulting firms.
Payroll company – one of the leading IT services & staffing companies, with a presence in India, UK, Europe, Australia, New Zealand, US, Canada, Singapore, Indonesia, and the Middle East.
Highlights of this role:
• It's a long-term role.
• High possibility of conversion within 6 months, or after 6 months, if you perform well.
• Interview – 2 rounds in total (both virtual), but one face-to-face meeting is mandatory at any location – Pune/Hyderabad/Bangalore/Chennai.
Points to remember:
1. You should have valid experience and relieving letters from all your past employers.
2. Must be available to join within 15 days.
3. Must be ready to work 2-3 days a week from the client office.
4. Must have a continuous PF service history for the last 4 years.
What we offer during the role:
- Competitive Salary
- Flexible working hours and hybrid work mode.
- Potential for full-time conversion, including comprehensive benefits: PF, gratuity, paid leave, paid holidays (as per client), health insurance, and Form 16.
How to Apply:
- Please fill in the summary sheet given below.
- Please provide your UAN service history.
- Attach a latest photo.
IoT Migration Architect (Azure to AWS) - Job Description
Job Title: IoT Migration Architect (Azure to AWS)
Experience Range: 10+ Years
Role Summary
The IoT Migration Architect is a senior-level technical expert responsible for providing architecture leadership, design, and hands-on execution for migrating complex Internet of Things (IoT) applications and platforms from Microsoft Azure to Amazon Web Services (AWS). This role requires deep expertise in both Azure IoT and the entire AWS IoT ecosystem, ensuring a seamless, secure, scalable, and cost-optimized transition with minimal business disruption.
Required Technical Skills & Qualifications
10+ years of progressive experience in IT architecture, with a minimum of 4+ years focused on IoT Solution Architecture and Cloud Migrations.
Deep, hands-on expertise in the AWS IoT ecosystem, including design, implementation, and operations (AWS IoT Core, Greengrass, Device Management, etc.).
Strong, hands-on experience with Azure IoT services, specifically Azure IoT Hub, IoT Edge, and related data/compute services (e.g., Azure Stream Analytics, Azure Functions).
Proven experience in cloud-to-cloud migration projects, specifically moving enterprise-grade applications and data, with a focus on the unique challenges of IoT device and data plane migration.
Proficiency with IoT protocols such as MQTT, AMQP, HTTPS, and securing device communication (X.509).
Expertise in Cloud-Native Architecture principles, microservices, containerization (Docker/Kubernetes/EKS), and Serverless technologies (AWS Lambda).
Solid experience with CI/CD pipelines and DevOps practices in a cloud environment (e.g., Jenkins, AWS CodePipeline, GitHub Actions).
Strong knowledge of database technologies, both relational (e.g., RDS) and NoSQL (e.g., DynamoDB).
Certifications Preferred: AWS Certified Solutions Architect (Professional level highly desired), or other relevant AWS/Azure certifications.
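The MQTT proficiency listed above largely comes down to topic and wildcard semantics, which carry over from Azure IoT Hub to AWS IoT Core during a migration. A sketch of standard MQTT topic-filter matching; the device topic names are invented for illustration.

```python
def topic_matches(filter_, topic):
    """MQTT topic-filter matching as defined by the MQTT spec (and used by
    AWS IoT Core): '+' matches exactly one level, '#' matches the remainder
    of the topic and must be the last level of the filter."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                      # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False                     # filter is deeper than the topic
        if f != "+" and f != t_parts[i]:
            return False                     # literal level mismatch
    return len(f_parts) == len(t_parts)

print(topic_matches("devices/+/telemetry", "devices/pump-1/telemetry"))  # True
print(topic_matches("devices/#", "devices/pump-1/status/battery"))       # True
print(topic_matches("devices/+/telemetry", "devices/pump-1/status"))     # False
```

Note that Azure IoT Hub restricts which topics and wildcards devices may use, so subscription filters like these are one of the concrete things that must be remapped in an Azure-to-AWS migration.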
Your full name –
Contact NO –
Alternate Contact No-
Email ID –
Alternate Email ID-
Total Experience –
Experience in IoT –
Experience in AWS IoT-
Experience in Azure IoT –
Experience in Kubernetes –
Experience in Docker –
Experience in EKS-
Do you have a valid passport –
Current CTC –
Expected CTC –
What is your notice period in your current company –
Are you currently working –
If not working, when did you leave your last company –
Current location –
Preferred location –
This is a contract-to-hire role; are you okay with that –
Highest Qualification –
Current Employer (Payroll Company Name)
Previous Employer (Payroll Company Name)-
2nd Previous Employer (Payroll Company Name) –
3rd Previous Employer (Payroll Company Name)-
Are you holding any offer –
Are you expecting any offer –
Are you open to a contract-to-hire (C2H) role –
Is PF deduction happening in your current company –
Did PF deduction happen at your 2nd-last employer –
Did PF deduction happen at your 3rd-last employer –
Latest Photo –
UAN Service History -
Shantpriya Chandra
Director & Head of Recruitment.
Harel Consulting India Pvt Ltd
https://www.linkedin.com/in/shantpriya/
www.harel-consulting.com
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
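The RAG pipelines mentioned above reduce, at their core, to a retrieve-then-prompt loop. A toy sketch of the retrieval step: bag-of-words vectors and cosine similarity stand in for an embedding model and a vector database (both assumptions), and the documents and prompt template are invented.

```python
import math
from collections import Counter

def embed(text):
    # Bag-of-words "embedding": a stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "Refund requests are processed within 5 business days.",
    "The API rate limit is 100 requests per minute per key.",
]

def retrieve(query):
    # Rank the corpus by similarity to the query and return the best match.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

context = retrieve("what is the api rate limit?")
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what is the api rate limit?"
print(context)
```

In production this is where context-window management, caching, and token-cost optimization enter: the retrieved context, not the whole corpus, is what the LLM sees.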
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
Job Description.
1. Cloud experience (Any cloud is fine although AWS is preferred. If non-AWS cloud, then the experience should reflect familiarity with the cloud's common services)
2. Good grasp of scripting (Linux for sure, i.e., bash/sh/zsh, etc.; Windows: nice to have)
3. Python or Java or JS basic knowledge (Python Preferred)
4. Monitoring tools
5. Alerting tools
6. Logging tools
7. CICD
8. Docker/containers/(k8s/terraform nice to have)
9. Experience working on distributed applications with multiple services
10. Incident management
11. DB experience in terms of basic queries
12. Understanding of performance analysis of applications
13. Idea about data pipelines would be nice to have
14. Snowflake querying knowledge: nice to have
The person should be able to:
Monitor system issues
Create strategies to detect and address issues
Implement automated systems to troubleshoot and resolve issues.
Write and review post-mortems
Manage infrastructure for multiple product teams
Collaborate with product engineering teams to ensure best practices are being followed
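The "create strategies to detect and address issues" responsibility above often starts with a debounced alerting rule. A minimal sketch; the threshold, window, and sample values are illustrative assumptions, not any specific tool's configuration.

```python
from collections import deque

class ThresholdAlerter:
    """Minimal alerting rule: fire when a metric stays above a threshold for
    `window` consecutive samples, debouncing one-off spikes."""

    def __init__(self, threshold, window=3):
        self.threshold = threshold
        self.recent = deque(maxlen=window)

    def observe(self, value):
        self.recent.append(value)
        return (len(self.recent) == self.recent.maxlen
                and all(v > self.threshold for v in self.recent))

# E.g., CPU% samples: the lone 95 does not fire, but three sustained breaches do.
alerter = ThresholdAlerter(threshold=90.0, window=3)
fired = [alerter.observe(v) for v in [85, 95, 96, 97, 80]]
print(fired)  # [False, False, False, True, False]
```

Real monitoring stacks (Prometheus alert rules, CloudWatch alarms) express the same "for N consecutive periods" idea declaratively.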
Role Description
- Develop the tech stack of Pieworks to achieve the flywheel in the most efficient manner
- Focus on standardising the code, making it more modular to enable quick updates
- Integrating with various APIs to provide a seamless solution to all stakeholders
- Build a robust node-based information tracking & flow to capitalize on degrees of separation between members and candidates
- Bring in new design ideas to make the UI stunning and the UX functional, i.e., one-click actionable as much as possible
Mandatory Criteria
- Ability to code in Java
- Ability to Scale on AWS
- Product Thinking
- Passion for Automation
If interested, kindly share your updated resume at 82008 31681
🔧 Key Skills
- Strong expertise in Python (3.x)
- Experience with Django / Flask / FastAPI
- Good understanding of Microservices & RESTful API development
- Proficiency in MySQL/PostgreSQL – queries, stored procedures, optimization
- Solid grip on Data Structures & Algorithms (DSA)
- Comfortable working with Linux & Windows environments
- Hands-on experience with Git, CI/CD (Jenkins/GitHub Actions)
- Familiarity with Docker / Kubernetes is a plus
Software Development Engineer III (Frontend)
About the company:
At WizCommerce, we’re building the AI Operating System for Wholesale Distribution — transforming how manufacturers, wholesalers, and distributors sell, serve, and scale.
With a growing customer base across North America, WizCommerce helps B2B businesses move beyond disconnected systems and manual processes with an integrated, AI-powered platform.
Our platform brings together everything a wholesale business needs to sell smarter and faster. With WizCommerce, businesses can:
- Take orders easily — whether at a trade show, during customer visits, or online.
- Save hours of manual work by letting AI handle repetitive tasks like order entry or creating product content.
- Offer a modern shopping experience through their own branded online store.
- Access real-time insights on what’s selling, which customers to focus on, and where new opportunities lie.
The wholesale industry is at a turning point — outdated systems and offline workflows can no longer keep up. WizCommerce brings the speed, intelligence, and design quality of modern consumer experiences to the B2B world, helping companies operate more efficiently and profitably.
Backed by leading global investors including Peak XV Partners (formerly Sequoia Capital India), Z47 (formerly Matrix Partners), Blume Ventures, and Alpha Wave Global, we’re rapidly scaling and redefining how wholesale and distribution businesses sell and grow.
If you want to be part of a fast-growing team that’s disrupting a $20 trillion global industry, WizCommerce is the place to be.
Read more about us in Economic Times, The Morning Star, YourStory, or on our website!
Founders:
Divyaanshu Makkar (Co-founder, CEO)
Vikas Garg (Co-founder, CCO)
Job Description:
Role & Responsibilities:
- Design, develop, and maintain complex web applications using ReactJS, and relevant web technologies.
- Work closely with Product Managers, Designers, and other stakeholders to understand requirements and translate them into technical specifications and deliverables.
- Take ownership of technical decisions, code reviews, and ensure best practices are followed in the team.
- Provide technical leadership and mentorship to junior developers, promoting their professional growth and skill development.
- Collaborate with cross-functional teams to integrate web applications with other systems and platforms.
- Stay up-to-date with emerging trends and technologies in web development to drive continuous improvement and innovation.
- Contribute to the design and architecture of the frontend codebase, ensuring high-quality, maintainable, and scalable code.
Requirements:
- Bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience in frontend development using ReactJS, Redux, and related web technologies.
- Strong understanding of web development concepts, including HTML, CSS, JavaScript, and responsive design principles.
- Experience with modern web development frameworks and tools such as ReactJS, Redux, Webpack, and Babel.
- Experience working in an Agile development environment and delivering software in a timely and efficient manner.
- Strong verbal and written communication skills, with the ability to effectively collaborate with cross-functional teams and stakeholders.
- Ability to take ownership of projects, prioritize tasks, and meet deadlines.
- Experience with backend development and AWS is a plus.
Benefits:
- Opportunity to work in a fast-paced, growing B2B SaaS company.
- Collaborative and innovative work environment.
- Competitive salary and benefits package.
- Growth and professional development opportunities.
- Flexible working hours to accommodate your schedule.
Compensation: Best in the industry
Role location: Bengaluru/Gurugram
Website Link: https://www.wizcommerce.com/