50+ Elastic Search Jobs in India

Job Title: SRE Lead Engineer
Location: Hyderabad, India
Company: Client of Options Executive Search, an AI SaaS product development company
We are seeking a DevOps/SRE Lead Engineer to architect and scale our client's multi-tenant SaaS platform with AI/ML at the core.
Our client, a fast-growing AI-powered SaaS company in the FinTech space, is looking for a Site Reliability Engineering (SRE) Lead Engineer to join their dynamic team. This is an opportunity to design and operate large-scale SaaS systems that integrate cutting-edge AI/ML capabilities.
About the Role
As the SRE Lead Engineer, you will be responsible for architecting, building, and maintaining infrastructure that powers a multi-tenant SaaS platform. You’ll drive reliability, scalability, and security, while supporting AI/ML pipelines in production. This is a hands-on role with significant ownership, requiring both technical depth and leadership in site reliability practices.
Key Responsibilities
- Architect, design, and deploy end-to-end infrastructure for large-scale, microservices-based SaaS platforms.
- Ensure system reliability, scalability, and security for AI/ML model integrations and data pipelines.
- Automate environment provisioning and management using Terraform in AWS (EKS-focused).
- Implement full-stack observability across applications, networks, and operating systems.
- Lead incident management and participate in 24/7 on-call rotation.
- Optimize SaaS reliability while enabling REST APIs, SSO integrations (Okta/Auth0), and cloud data services (RDS/MySQL, Elasticsearch).
- Define and maintain backup and disaster recovery for critical workloads.
Required Skills & Experience
- 8+ years in SRE/DevOps roles, managing enterprise SaaS applications in production.
- Minimum 1 year experience with AI/ML infrastructure or model-serving environments.
- Strong expertise in AWS cloud, particularly EKS, container orchestration, and Kubernetes.
- Hands-on experience with Infrastructure as Code (Terraform), Docker, and scripting (Python, Bash).
- Solid Linux OS and networking fundamentals.
- Experience in monitoring and observability with ELK, CloudWatch, or similar tools.
- Strong track record with microservices, REST APIs, SSO, and cloud databases.
Nice-to-Have Skills
- Experience with MLOps and AI/ML pipeline observability.
- Cost optimization and security hardening in multi-tenant SaaS.
- Prior exposure to FinTech or enterprise finance solutions.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related discipline.
- AWS Certified Solutions Architect (strongly preferred).
- Experience in early-stage or high-growth startups is an advantage.
Why Join?
- Be at the forefront of AI/ML-powered SaaS innovation in FinTech.
- Work with a high-energy, entrepreneurial team building next-gen infrastructure.
- Take ownership of mission-critical reliability challenges.
- Grow your career in an environment that values impact, adaptability, and innovation.
If you’re passionate about building secure, scalable, and intelligent platforms, we’d love to hear from you. Apply now to be part of our client’s journey in redefining enterprise finance operations.
Role Overview
We are seeking a motivated and technically versatile CMS Engineer (IC2) to support our transition from SharePoint to Contentful, while also contributing to the broader CMS ecosystem. This is an excellent opportunity for an early-career engineer to work on enterprise-grade platforms and microservice-based architectures.
Required Qualifications
· 3-5 years of experience with SharePoint Online and/or enterprise CMS platforms.
· Familiarity with Contentful or other headless CMS solutions is a strong plus.
· Hands-on experience with Java, Spring Boot, and relational databases (e.g., PostgreSQL).
· Exposure to Kafka, Elasticsearch, or similar distributed technologies is desirable.
· Solid problem-solving and communication skills with an eagerness to learn.
What would you do here
SharePoint to Contentful Migration | Backend + CMS Integration
· Assist in maintaining and enhancing SharePoint Online content and features during the transition period.
· Support the migration of pages, documents, and metadata from SharePoint to Contentful.
· Contribute to the design and development of backend services that integrate with Contentful using Java, Spring Boot, and REST APIs.
· Write reusable services for content delivery, search indexing (via Elasticsearch), and event processing (via Kafka).
· Help develop APIs for CMS-based applications that interact with PostgreSQL databases.
· Troubleshoot CMS-related issues and support testing efforts during platform migration.
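The search-indexing step above can be sketched as a pure transformation: migrated content entries become an Elasticsearch `_bulk` payload. The role's stack is Java/Spring Boot; Python is used here only for brevity, and the entry fields and index name are illustrative assumptions, not the project's actual schema.

```python
# Sketch: turning migrated CMS entries into an Elasticsearch bulk payload.
# Entry shape and the "cms-pages" index name are hypothetical placeholders.
import json

def to_bulk_actions(entries, index="cms-pages"):
    """Build the NDJSON body expected by the Elasticsearch _bulk API:
    one action line followed by one source line per document."""
    lines = []
    for entry in entries:
        lines.append(json.dumps({"index": {"_index": index, "_id": entry["id"]}}))
        lines.append(json.dumps({"title": entry["title"], "body": entry["body"]}))
    return "\n".join(lines) + "\n"   # _bulk requires a trailing newline

payload = to_bulk_actions([
    {"id": "page-1", "title": "Home", "body": "Welcome"},
    {"id": "page-2", "title": "FAQ", "body": "Answers"},
])
print(payload)
```

Keeping the payload-building step separate from the HTTP call makes it easy to unit-test the migration mapping without a live cluster.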
About the Role
We are seeking highly driven Backend Web Developers with strong knowledge of Node.js, TypeScript, and MongoDB. You will play a key role in building and maintaining the backend architecture, APIs, and scalable services powering our web applications.
This position is ideal for candidates who are self-starters, comfortable in a startup environment, and can pick up tasks independently from Day 0.
Key Responsibilities
- Design, develop, and maintain scalable backend services and RESTful APIs using Node.js & TypeScript.
- Work with MongoDB for efficient data modeling, schema design, and query optimization.
- Integrate backend services with frontend applications and third-party APIs.
- Write clean, modular, and efficient code with a strong emphasis on performance and security.
- Ensure error handling, logging, and monitoring are implemented for production readiness.
- Collaborate with frontend developers, product managers, and designers to deliver end-to-end features.
- Implement and maintain microservices architecture.
- Support deployment processes; familiarity with the AWS stack (EC2, S3, Lambda, etc.) is good to have.
Required Skills & Qualifications
- Strong proficiency in Node.js and TypeScript.
- Hands-on experience with MongoDB (Mongoose ORM preferred).
- Solid understanding of Redis, Messaging Queues, etc.
- Knowledge of Git/GitHub with a strong portfolio of deployed projects (blank GitHub profiles will be rejected).
- Strong problem-solving, debugging, and optimization skills.
- Ability to take ownership of tasks and work independently.
- Familiarity with async programming, promises, event loops, and backend architecture concepts.
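The async-programming and event-loop concepts listed above can be illustrated with a minimal sketch; Python's asyncio is used here for brevity, and the same shape maps directly onto Node.js promises (`asyncio.gather` plays the role of `Promise.all`).

```python
# Sketch: async/await and the event loop, shown with Python's asyncio.
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)   # yields control back to the event loop
    return f"{name}:done"

async def main():
    # Like Promise.all: both coroutines run concurrently on one event loop.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

results = asyncio.run(main())
print(results)   # ['a:done', 'b:done']
```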
Preferred Skills
- Prior experience (internship/full-time) in a startup environment.
- Exposure to AWS stack (Lambda, EC2, S3, CloudWatch, RDS, etc.).
- Experience with Docker, CI/CD pipelines, or cloud deployments.
- Understanding of server-side caching and messaging queues.
- Familiarity with testing frameworks (Jest).
Eligibility Criteria
- Experience: 0 – 2 years (Freshers with strong projects are welcome).
- Education: Tier 2 / Tier 3 college graduates preferred.
- GitHub Requirement: Candidates must have solid GitHub profiles with deployed projects. Inactive or blank GitHub accounts will be rejected.
Selection Process
- Written Test – Core programming fundamentals & problem-solving.
- Sample Task – Real-world backend task (API/service implementation).
- Technical Interview (Basic, 30 min) – Node.js, TS, Mongo fundamentals.
- Advanced Technical Interview (90 min) – Deep dive into system design, architecture, scaling, and debugging.
- HR Round – Culture fit and final discussion.
Why Join Us?
- Work in a high-growth startup environment where your contributions have a direct impact.
- Ownership from Day 0 – take responsibility for building and shipping features.
- Learn and grow with a team of passionate engineers.
- Opportunity to work with modern tech stack and real-world problem-solving.
About Fundly
- Fundly is building a retailer-centric pharma supply chain platform and marketplace for over 10 million pharma retailers in India
- Founded by experienced industry professionals with cumulative experience of 30+ years
- Has grown to 60+ people in 12 cities in less than 2 years
- Monthly disbursement of INR 50 Cr
- Raised venture capital of USD 5M so far from Accel Partners, one of India's biggest VC funds
Opportunity at Fundly
- Building a retailer-centric ecosystem in the pharma supply chain
- Fast-growing: 3,000+ retailers, 36,000 transactions, and INR 200+ Cr disbursement in the last 2 years
- Technology-first and customer-first fintech organization
- Be an early team member, visible and influence the product and technology roadmap
- Be a leader and own responsibility and accountability
Responsibilities
- Be hands-on and ship good-quality code fast
- Execute and deploy technical solutions
- Understand existing code, maintain and improve it
- Control Technical Debt
- Ensure healthy software engineering practices like planning, estimation, documentation, code review
Qualifications
- 3+ years of hands-on experience in Java, Spring Boot, Spring MVC, Hibernate, Play
- Hands-on experience with SQL and NoSQL stores such as Postgres, MongoDB, Elasticsearch, and Redis
GaragePlug Inc
GaragePlug is one of the fastest-growing Automotive tech startups working towards revolutionising the automotive aftermarket industry with strong state-of-the-art technologies.
Role Overview
As we plan to grow, we have many challenges to solve. Some of the new features and products that are already in the pipeline include advanced analytics, search, reporting, etc., to name a few. Our present backend is based on the microservices architecture built using Spring Boot. With growing complexity, we are open to using other tools and technologies as needed. We are looking for a talented and motivated engineer to join our fleet and help us solve real-world problems in this exciting field. Join us and share the dream of building the next-generation online platform for the Auto industry.
What you'll do:
- Design and architect our core components
- End-to-end systems development
- Ownership of complete systems from development to production and maintenance
- Infrastructure management on AWS
Technologies you'll use:
- Microservices, AWS, Java, Spring-boot
- Gradle / Maven
- ElasticSearch
- Jenkins, CI/CD
- Containerization technologies like Docker, Kubernetes, etc.
- RDBMS (PostgreSQL) or NoSQL databases (MongoDB) & Enterprise Messaging Applications (Kafka/SQS)
- JUnit, TestNG, Cucumber, etc.
- Nginx
- Any cool piece of technology that you can bring onboard
What you are:
- You love technology and are always open to learning new tools
- You are proficient with server technologies: Spring / Spring Boot
- You have good experience in scaling, performance tuning & optimization at both API and storage layers
- You have an excellent grasp of OOPS concepts, data structures, algorithms, design patterns & REST APIs
- You are proficient in Java, SQL
- You have good knowledge of Databases: RDBMS/Document
- You have a good understanding of REST API design
- You have knowledge of DevOps
- You implement coding best practices and code-quality gates as per program norms
- Knowledge of Angular 2+ is a big plus

Job Title: Sr DevOps Engineer
Location: Bengaluru, India (Hybrid)
Reports to: Sr Engineering Manager
About Our Client:
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools, especially GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with autoscaling optimization, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
•You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.
Responsibilities:
- Design, develop, and maintain scalable applications using Java/Kotlin Core Concepts and Spring Boot MVC.
- Build and optimize REST APIs for seamless client-server communication.
- Develop and ensure efficient HTTP/HTTPS request-response mechanisms.
- Handle Java/Kotlin version upgrades confidently, ensuring code compatibility and leveraging the latest features.
- Solve complex business logic challenges with a methodical and innovative approach.
- Write optimized SQL queries with Postgres DB.
- Ensure code quality through adherence to design patterns (e.g., Singleton, Factory, Observer, MVC) and unit testing frameworks like JUnit.
- Integrate third-party APIs and develop large-scale systems with technical precision.
- Debug and troubleshoot production issues.
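Of the design patterns named above, Observer is a common interview talking point; a minimal sketch follows (Python for brevity; in Java/Kotlin the same shape uses a listener interface). The `OrderEvents` name is illustrative.

```python
# Sketch: the Observer pattern - subscribers register callbacks and are
# notified on every published event.
class OrderEvents:
    def __init__(self):
        self._observers = []

    def subscribe(self, fn):
        self._observers.append(fn)

    def publish(self, order_id):
        for fn in self._observers:   # notify every registered observer
            fn(order_id)

received = []
events = OrderEvents()
events.subscribe(received.append)
events.publish("ORD-1")
print(received)   # ['ORD-1']
```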
Requirements:
- 2 to 4 years of hands-on experience in Java/Kotlin Spring Boot development.
- Proven expertise in handling version upgrades for Java and Kotlin with confidence.
- Strong logical thinking and problem-solving skills, especially in implementing complex algorithms.
- Proficiency with Git, JIRA, and managing software package versions.
- Familiarity with SaaS-based products, XML parsing/generation, and generating PDFs, XLS, CSVs using Spring Boot.
- Strong understanding of JPA, Hibernate, and core Java concepts (OOP).
Skills (Good to Have):
- Exposure to Docker, Redis, and Elasticsearch.
- Knowledge of transaction management and solving computational problems.
- Eagerness to explore new technologies.
Job Title : Senior Data Engineer
Experience : 6 to 10 Years
Location : Gurgaon (Hybrid – 3 days office / 2 days WFH)
Notice Period : Immediate to 30 days (Buyout option available)
About the Role :
We are looking for an experienced Senior Data Engineer to join our Digital IT team in Gurgaon.
This role involves building scalable data pipelines, managing data architecture, and ensuring smooth data flow across the organization while maintaining high standards of security and compliance.
Mandatory Skills :
Azure Data Factory (ADF), Azure Cloud Services, SQL, Data Modelling, CI/CD tools, Git, Data Governance, RDBMS & NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch), Data Lake migration.
Key Responsibilities :
- Design and develop secure, scalable end-to-end data pipelines using Azure Data Factory (ADF) and Azure services.
- Build and optimize data architectures (including Medallion Architecture).
- Collaborate with cross-functional teams on cybersecurity, data privacy (e.g., GDPR), and governance.
- Manage structured/unstructured data migration to Data Lake.
- Ensure CI/CD integration for data workflows and version control using Git.
- Identify and integrate data sources (internal/external) in line with business needs.
- Proactively highlight gaps and risks related to data compliance and integrity.
Required Skills :
- Azure Data Factory (ADF) – Mandatory
- Strong SQL and Data Modelling expertise.
- Hands-on with Azure Cloud Services and data architecture.
- Experience with CI/CD tools and version control (Git).
- Good understanding of Data Governance practices.
- Exposure to ETL/ELT pipelines and Data Lake migration.
- Working knowledge of RDBMS and NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch).
- Understanding of RESTful APIs, deployment on cloud/on-prem infrastructure.
- Strong problem-solving, communication, and collaboration skills.
Additional Info :
- Work Mode : Hybrid (No remote); relocation to Gurgaon required for non-NCR candidates.
- Communication : Above-average verbal and written English skills.
Perks & Benefits :
- 5 Days work week
- Global exposure and leadership collaboration.
- Health insurance, employee-friendly policies, training and development.
We’re looking for an Engineering Manager to guide our micro-service platform and mentor a fully remote backend team. You’ll blend hands-on technical ownership with people leadership—shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.
Key Responsibilities:
Architecture & Delivery
• Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc.
• Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations.
• Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.
Team Leadership & Growth
• Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones.
• Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture.
• Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.
Stakeholder Collaboration
• Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions.
• Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.
Technical Vision & Governance
• Own coding standards, architectural principles, and technology selection.
• Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies.
• Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications:
● 8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
● Proven experience leading or mentoring engineers; direct people-management a plus
● Expert knowledge of AWS services and cloud-native design patterns
● Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
● Demonstrated success scaling systems to millions of users or billions of events
● Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
● Excellent communication and stakeholder-management skills in a remote-first environment
Nice-to-Have:
● Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning
● Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously
● Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company
● Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Job Title: Engineering Manager (Java / Spring Boot, AWS) – Remote
Leadership Role
Location: Remote
Employment Type: Full-time

We are seeking a highly skilled and motivated Full Stack Developer with strong proficiency in React.js for front-end development and Java (Spring Boot) for back-end services. The ideal candidate will be responsible for designing, developing, and maintaining scalable web applications, ensuring responsiveness and performance across the stack.
Key Responsibilities:
- Develop and maintain front-end web applications using React.js, Redux, Elasticsearch, TypeScript, HTML5, and CSS3.
- Design and implement RESTful APIs and microservices using Java, Spring Boot, and related frameworks.
- Collaborate with UI/UX designers, product managers, and QA to translate business requirements into technical solutions.
- Optimize applications for maximum speed and scalability.
- Integrate with third-party APIs, services, and databases.
- Write clean, maintainable, and testable code following best practices.
- Conduct code reviews, unit testing, and participate in system design.
- Troubleshoot and debug production issues as needed.
- Participate in Agile/Scrum development lifecycle including sprint planning, stand-ups, and retrospectives.
Job Title : Junior ELK Data Engineer
Experience Required : 3+ Years
Location : Bangalore (Work From Office / Hybrid as per project requirement)
Job Type : Full-Time
Joining : Immediate Joiners only
Job Summary :
We are seeking a Junior ELK Data Engineer with over 3 years of hands-on experience in the Elastic Stack (Elasticsearch, Logstash, Kibana, and Beats).
The ideal candidate will help design, develop, and optimize scalable data ingestion, indexing, and visualization solutions, contributing to the development of high-performance observability and analytics platforms for real-time monitoring and analysis.
Mandatory Skills :
Elastic Stack (Elasticsearch, Logstash, Kibana, Beats), real-time data ingestion, dashboard development, log processing, search optimization, system observability.
Key Responsibilities :
- Build and maintain data pipelines using Logstash, Beats, and Elasticsearch for real-time log ingestion and processing.
- Design and develop Kibana dashboards for effective visualization and alerting across various data sources.
- Optimize indexing strategies for large-scale distributed systems to ensure high search performance and reliability.
- Collaborate with DevOps and SRE teams to enable effective observability and monitoring solutions.
- Analyze system performance and troubleshoot issues related to logging and monitoring pipelines.
- Assist in configuring and maintaining ELK stack components in production and development environments.
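The indexing and search-optimization responsibilities above hinge on choosing the right field types; a sketch of a log-index mapping and a filtered query follows. The field names (`@timestamp`, `level`, `message`) follow common ELK conventions but are assumptions, not this project's actual schema.

```python
# Sketch: an index mapping and a time-range query for a log index,
# expressed as the JSON bodies sent to Elasticsearch.
log_mapping = {
    "mappings": {
        "properties": {
            "@timestamp": {"type": "date"},
            "level": {"type": "keyword"},   # keyword: exact-match filters/aggs
            "message": {"type": "text"},    # text: analyzed full-text search
        }
    }
}

# ERROR-level entries from the last hour; "filter" skips scoring for speed.
errors_last_hour = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    }
}
```

Using `keyword` for `level` (rather than `text`) is what makes exact-match filtering and Kibana aggregations on that field work reliably.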
Preferred Qualifications:
- Experience in handling distributed systems logs and metrics at scale.
- Familiarity with scripting (Python/Shell) for data manipulation or automation.
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
- Understanding of containerized environments like Docker/Kubernetes.

What We’re Looking For:
Meltwater is a global leader in media intelligence and social analytics. Our mission is to help businesses make more informed decisions by providing them with actionable insights from the vast ocean of online data. With a diverse and talented team spread across the world, we are committed to driving innovation and pushing the boundaries of what's possible in our field.
Meltwater is seeking a Full Stack Software Engineer to join our new influencer marketing team in our Hyderabad office. We are looking for an individual who is not only technically proficient but also embodies the values of collaboration, open-mindedness, and a proactive approach to problem-solving. As a Full Stack Software Engineer, you will play a key role in shaping our technology stack, working with cutting-edge technologies like React, TypeScript, Next.js, Node.js, and the AWS ecosystem.
What You'll Do:
- Collaborate effectively with cross-functional teams to develop and maintain software solutions that meet business requirements.
- Lead the development of high-quality code, ensuring scalability, security, and performance.
- Continuously learn and adapt to new technologies, tools, and best practices.
- Identify and resolve technical challenges.
What You'll Bring:
- Bachelor's or Master's degree in Computer Science or related field.
- At least 3 years of experience as a Full Stack Software Engineer.
- Strong problem-solving skills and the ability to think critically.
- Experience with databases such as MySQL, Elasticsearch, etc.
- Experience with backend technologies, preferably NodeJS & Typescript.
- Experience with frontend technologies, preferably React & Typescript.
- Excellent communication and collaboration skills.
- Self-motivated with a passion for learning and self-improvement.
What We Offer:
- Opportunity to work on cutting-edge technologies and projects.
- A culture that values innovation, collaboration, and personal growth.
- A dynamic and diverse team with a global presence.

Company Overview
We are a dynamic startup dedicated to empowering small businesses through innovative technology solutions. Our mission is to level the playing field for small businesses by providing them with powerful tools to compete effectively in the digital marketplace. Join us as we revolutionize the way small businesses operate online, bringing innovation and growth to local communities.
Job Description
We are seeking a skilled and experienced Data Engineer to join our team. In this role, you will develop systems on cloud platforms capable of processing millions of interactions daily, leveraging the latest cloud computing and machine learning technologies while creating custom in-house data solutions. The ideal candidate should have hands-on experience with SQL, PL/SQL, and any standard ETL tools. You must be able to thrive in a fast-paced environment and possess a strong passion for coding and problem-solving.
Required Skills and Experience
- Minimum 5 years of experience in software development.
- 3+ years of experience in data management and SQL expertise – PL/SQL, Teradata, and Snowflake experience strongly preferred.
- Expertise in big data technologies such as Hadoop, HiveQL, and Spark (Scala/Python).
- Expertise in cloud technologies – AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR).
- Experience with queuing systems (e.g., SQS, Kafka) and caching systems (e.g., Ehcache, Memcached).
- Experience with container management tools (e.g., Docker Swarm, Kubernetes).
- Familiarity with data stores, including at least one of the following: Postgres, MongoDB, Cassandra, or Redis.
- Ability to create advanced visualizations and dashboards to communicate complex findings (e.g., Looker Studio, Power BI, Tableau).
- Strong skills in manipulating and transforming complex datasets for in-depth analysis.
- Technical proficiency in writing code in Python and advanced SQL queries.
- Knowledge of AI/ML infrastructure, best practices, and tools is a plus.
- Experience in analyzing and resolving code issues.
- Hands-on experience with software architecture concepts such as Separation of Concerns (SoC) and micro frontends with theme packages.
- Proficiency with the Git version control system.
- Experience with Agile development methodologies.
- Strong problem-solving skills and the ability to learn quickly.
- Exposure to Docker and Kubernetes.
- Familiarity with AWS or other cloud platforms.
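To make the ETL expectation above concrete, here is a minimal Python sketch of an extract-transform-load step. It uses the standard library's sqlite3 in place of a real warehouse, and all table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical ETL step: extract raw interaction rows, normalize them,
# and load them into a reporting table. Table and column names are
# invented for illustration only.
def run_etl(conn):
    cur = conn.cursor()
    # Extract: pull raw rows from the source table
    rows = cur.execute("SELECT user_id, event, amount FROM raw_events").fetchall()
    # Transform: drop incomplete rows, normalize event names
    cleaned = [
        (user_id, event.strip().lower(), amount)
        for user_id, event, amount in rows
        if user_id is not None and amount is not None
    ]
    # Load: write the cleaned rows into the reporting table
    cur.executemany("INSERT INTO events_clean VALUES (?, ?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_events (user_id INT, event TEXT, amount REAL)")
    conn.execute("CREATE TABLE events_clean (user_id INT, event TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO raw_events VALUES (?, ?, ?)",
        [(1, " Purchase ", 9.99), (2, "REFUND", 5.0), (None, "click", 1.0)],
    )
    print(run_etl(conn))  # 2 rows survive the quality filter
```

In practice this logic would run inside an orchestrated tool such as Glue or Airflow against Snowflake or Redshift rather than SQLite, but the extract/transform/load shape is the same.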
Responsibilities
- Develop and maintain our in-house search and reporting platform.
- Create data solutions that complement core products to improve performance and data quality.
- Collaborate with the development team to design, develop, and maintain our suite of products.
- Write clean, efficient, and maintainable code, adhering to coding standards and best practices.
- Participate in code reviews and testing to ensure high-quality code.
- Troubleshoot and debug application issues as needed.
- Stay up-to-date with emerging trends and technologies in the development community.
How to apply?
- If you are passionate about designing user-centric products and want to be part of a forward-thinking company, we would love to hear from you. Please send your resume, a brief cover letter outlining your experience, and your current CTC (Cost to Company) as part of your application.
Join us in shaping the future of e-commerce!

Job Title : Backend Developer (Node.js or Python/Django)
Experience : 2 to 5 Years
Location : Connaught Place, Delhi (Work From Office)
Job Summary :
We are looking for a skilled and motivated Backend Developer (Node.js or Python/Django) to join our in-house engineering team.
Key Responsibilities :
- Design, develop, test, and maintain robust backend systems using Node.js or Python/Django.
- Build and integrate RESTful APIs including third-party Authentication APIs (OAuth, JWT, etc.).
- Work with data stores like Redis and Elasticsearch to support caching and search features.
- Collaborate with frontend developers, product managers, and QA teams to deliver complete solutions.
- Ensure code quality, maintainability, and performance optimization.
- Write clean, scalable, and well-documented code.
- Participate in code reviews and contribute to team best practices.
Required Skills :
- 2 to 5 Years of hands-on experience in backend development.
- Proficiency in Node.js and/or Python (Django framework).
- Solid understanding and experience with Authentication APIs.
- Experience with Redis and Elasticsearch for caching and full-text search.
- Strong knowledge of REST API design and best practices.
- Experience working with relational and/or NoSQL databases.
- Must have completed at least 2 end-to-end backend projects.
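Since Authentication APIs such as JWT appear in the requirements above, the sketch below shows the core signing-and-verification idea behind an HS256 token using only the Python standard library. This is illustrative only; production code should use a vetted library (e.g., PyJWT) and proper secret management:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustrative only; never hard-code secrets

def _b64(data: bytes) -> str:
    # JWT uses URL-safe base64 without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict) -> str:
    """Build a JWT-style token: header.payload.signature."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    mac = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256)
    return f"{header}.{body}.{_b64(mac.digest())}"

def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, signature = token.split(".")
    mac = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256)
    return hmac.compare_digest(signature, _b64(mac.digest()))
```

The same structure underlies OAuth bearer tokens signed with HS256; libraries add expiry checks, algorithm validation, and key rotation on top.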
Nice to Have :
- Experience with Docker or containerized environments.
- Familiarity with CI/CD pipelines and DevOps workflows.
- Exposure to cloud platforms like AWS, GCP, or Azure.

Experience: 5-8 Years
Work Mode: Remote
Job Type: Full-time
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elasticsearch, and AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elasticsearch, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture.
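The data quality checks mentioned above often start as simple row-level validations run before load. The sketch below (field names are hypothetical) tallies rows that pass or fail a required-fields check:

```python
# Hypothetical row-level quality check of the kind a pipeline might run
# before loading into the warehouse; field names are illustrative.
def quality_report(rows, required_fields):
    """Return counts of passed/failed rows plus per-row failure reasons."""
    report = {"passed": 0, "failed": 0, "reasons": []}
    for i, row in enumerate(rows, start=1):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            report["failed"] += 1
            report["reasons"].append((i, "missing: " + ", ".join(missing)))
        else:
            report["passed"] += 1
    return report
```

In a real pipeline the failed rows would be routed to a quarantine table and the counts emitted as metrics so monitoring can alert on sudden quality drops.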
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- Minimum 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elasticsearch and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Job Description
We are seeking a skilled DevOps Specialist to join our global automotive team. As DevOps Specialist, you will be responsible for managing operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence for enterprise IT projects. You will be providing support for critical application environments for industry leaders in the automotive industry.
Responsibilities:
Perform daily maintenance tasks covering application availability and response times, proactive incident tracking on system logs, and resource monitoring.
Incident Management: Monitor and respond to tickets raised by the DevOps team or end-users. Support users with prepared troubleshooting procedures. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
Requirements:
DevOps Skillset: Logfile analysis/troubleshooting (ELK Stack), Linux administration, Monitoring (AppDynamics, Checkmk, Prometheus, Grafana), Security (Black Duck, SonarQube, Dependabot, OWASP, or similar)
Experience with Docker.
Familiarity with DevOps principles and ticket tools like ServiceNow.
Experience in handling confidential data and safety-sensitive systems.
Strong analytical, communication, and organizational abilities. Easy to work with.
Optional: Experience with our relevant business domain (Automotive / Manufacturing industry, especially production management systems). Familiarity with IT process frameworks SCRUM, ITIL.
Skills & Requirements
DevOps, Logfile Analysis, Troubleshooting, ELK Stack, Linux Administration, Monitoring, AppDynamics, Checkmk, Prometheus, Grafana, Security, Black Duck, SonarQube, Dependabot, OWASP, Docker, CI/CD, Jenkins, GitLab, Kubernetes, AWS, Azure, ServiceNow, Incident Management, Change Management, Problem Management, Documentation, Communication, Analytical Skills, Organizational Skills, SCRUM, ITIL, Automotive Industry, Manufacturing Industry, Production Management Systems.

We are seeking an experienced ELK Stack & APM Engineer to design, implement, and maintain our logging, monitoring, and application performance management infrastructure. The ideal candidate will have deep expertise in the Elastic Stack (Elasticsearch, Logstash, Kibana) and Application Performance Monitoring (APM).
Key Responsibilities
- Design, deploy, and maintain production-grade Elasticsearch clusters, ensuring high availability, performance, and scalability
- Implement and optimize log ingestion pipelines using Logstash and Beats
- Create and maintain Kibana dashboards, visualizations, and alerts for operational intelligence
- Configure and manage APM servers to monitor application performance metrics
- Develop and maintain data retention policies and implement data lifecycle management
- Troubleshoot performance issues and optimize cluster resources
- Implement security best practices and access controls across the ELK stack
- Automate deployment and configuration management using Infrastructure as Code
- Provide technical guidance and support to development teams for log integration
- Conduct capacity planning and resource optimization
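As one concrete angle on the data retention responsibility above, the sketch below decides which time-based indices have aged out, assuming the common `logs-YYYY.MM.DD` naming convention used by Beats and Logstash. In production this is typically delegated to Elasticsearch ILM policies rather than custom code:

```python
from datetime import date

# Sketch of a retention check for time-based indices named like
# "logs-YYYY.MM.DD". The naming pattern and 30-day default cutoff are
# assumptions for illustration; real deployments normally use
# Elasticsearch ILM to roll over and delete indices.
def indices_to_delete(index_names, today, retention_days=30):
    stale = []
    for name in index_names:
        try:
            stamp = name.rsplit("-", 1)[1]
            year, month, day = (int(p) for p in stamp.split("."))
        except (IndexError, ValueError):
            continue  # skip indices that don't match the date pattern
        if (today - date(year, month, day)).days > retention_days:
            stale.append(name)
    return stale
```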
Required Qualifications
- 2+ years of experience with Elasticsearch, including cluster management and optimization
- Strong knowledge of Logstash configuration, pipeline development, and data transformation
- Expertise in creating Kibana visualizations, dashboards, and implementing alerting
- Experience with APM implementation and troubleshooting
- Proficiency in one or more scripting languages (Python, Ruby, Bash)
- Strong understanding of logging architectures and best practices
- Familiarity with monitoring tools and observability practices
Who You Are:
- You’ve built scalable, production-ready APIs in fast-paced environments, and are comfortable with the challenges that come with a growing startup.
- You understand event-driven architectures and have hands-on experience building real-time applications.
- You are fully capable of taking ownership of backend systems, from designing the database schema to writing efficient, maintainable code.
- You think beyond the code: optimizing performance, scalability, and security are as important as building new features.
- You’re comfortable troubleshooting complex issues, whether that’s API bottlenecks, database performance, or production issues.
Responsibilities:
- Design, implement, and manage a NestJS backend with a modular, scalable architecture that can easily handle real-time events and communication.
- Develop and maintain WebSocket services for real-time event-driven updates and ensure seamless communication across the system.
- Integrate RabbitMQ for reliable internal messaging, handling queues and ensuring event-driven workflows are efficient and fault-tolerant.
- Implement database management strategies using MongoDB, Redis, and Elasticsearch, ensuring efficient data handling, indexing, and optimization.
- Integrate with external APIs (JSON-RPC, XML-RPC, REST) to enhance system capabilities.
- Ensure high performance of the system by optimizing database queries, implementing caching strategies using Redis, and ensuring optimal indexing and data flow.
- Implement role-based authentication and authorization mechanisms using JWT, OAuth, and RBAC patterns within NestJS.
- Follow security best practices to ensure sensitive data is protected, credentials are stored securely, and the system is resilient to common vulnerabilities.
- Work collaboratively with cross-functional teams to ensure seamless integration between different services and technologies.
Must-Have Skills:
- Strong proficiency in Node.js and NestJS with TypeScript.
- Solid experience with MongoDB, Redis, and Elasticsearch for data storage and real-time data handling.
- In-depth knowledge of WebSockets and Socket.IO in NestJS Gateways, enabling real-time communication and updates.
- Experience with RabbitMQ for message queuing and asynchronous task handling.
- Strong understanding of authentication & authorization systems using JWT, OAuth, and RBAC (Role-Based Access Control).
- Expertise in optimizing APIs, including techniques for caching, improving database performance, and reducing latency.
- Familiar with API security best practices, including secure storage of credentials, encryption, and safeguarding sensitive data.
Key Skills:
- TypeScript, NestJS, WebSockets, Socket.io
- Redis, MongoDB, Elasticsearch, RabbitMQ
- API Optimization, JWT, OAuth, RBAC
- Real-time communication, Event-driven architecture
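The RBAC pattern named in the skills above reduces to a mapping from roles to permission sets. The sketch below uses Python for brevity (the role's actual stack is TypeScript/NestJS), and the roles and permissions are invented for illustration:

```python
# Hypothetical role-to-permission table; real systems usually load this
# from configuration or a database rather than hard-coding it.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def can(user_roles, permission):
    """True if any of the user's roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)
```

In NestJS the same check typically lives in a guard that reads the user's roles from a verified JWT and compares them against metadata on the route handler.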
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps Engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM Server, APM agents, and interface configuration.
Create and develop regular "Default Dashboards" for visualizing metrics from various sources like Apache Webserver, application servers and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch.
Document all relevant information and train involved personnel in the used technologies.
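The "Default Alerting" responsibility above, flagging OOM errors, datasource issues, and LDAP errors, amounts to matching known failure signatures in log events. In a real deployment this is done with Kibana/Elasticsearch alert rules; the Python sketch below only illustrates the matching idea, and the signature strings are examples:

```python
# Example failure signatures of the kind a "Default Alerting" setup
# might watch for; the strings are illustrative, not an exhaustive list.
SIGNATURES = {
    "OOM": "java.lang.OutOfMemoryError",
    "datasource": "Cannot get a connection",
    "LDAP": "LdapException",
}

def scan_logs(lines):
    """Return (line number, alert name) for every matched signature."""
    alerts = []
    for lineno, line in enumerate(lines, start=1):
        for name, needle in SIGNATURES.items():
            if needle in line:
                alerts.append((lineno, name))
    return alerts
```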
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.

What we want to accomplish and why we need you?
Jio Haptik is an AI leader having pioneered AI-powered innovation since 2013. Reliance Jio Digital Services acquired Haptik in April 2019. Haptik currently leads India’s AI market having become the first to process 15 billion+ two-way conversations across 10+ channels and in 135 languages. Haptik is also a Category Leader across platforms including Gartner, G2, Opus Research & more. Recently Haptik won the award for “Tech Startup of the Year” in the AI category at Entrepreneur India Awards 2023, and gold medal for “Best Chat & Conversational Bot” at Martequity Awards 2023. Haptik has a headcount of 200+ employees with offices in Mumbai, Delhi, and Bangalore.
What will you do everyday?
As a backend engineer you will be responsible for building the Haptik platform which is used by people across the globe. You will be responsible for developing, architecting and scaling the systems that support all the functions of the Haptik platform. While you know how to work hard, you also know how to have fun at work and make friends with your colleagues.
Ok, you're sold, but what are we looking for in the perfect candidate?
Develop and maintain expertise in backend systems and API development, ensuring seamless integrations and scalable solutions, including:
- Strong expertise in backend systems, including design principles and adherence to good coding practices.
- Proven ability to enhance or develop complex tools at scale with a thorough understanding of system architecture.
- Capability to work cross-functionally with all teams, ensuring seamless implementation of APIs and solutioning for various tools.
- Skilled in high-level task estimation, scoping, and breaking down complex projects into actionable tasks.
- Proficiency in modeling and optimizing database architecture for enhanced performance and scalability.
- Experience collaborating with product teams to build innovative Proof of Concepts (POCs).
- Ability to respond to data requests and generate reports to support data-driven decision-making.
- Active participation in code reviews, automated testing, and quality assurance processes.
- Experience working in a scrum-based agile development environment.
- Commitment to staying updated with technology standards, emerging trends, and software development best practices.
- Strong verbal and written communication skills to facilitate collaboration and clarity.
Requirements*:
- A minimum of 5 years of experience in developing scalable products and applications.
- Must have a Bachelor's degree in Computer Engineering or a related field.
- Proficiency in Python and expertise in at least one backend framework, with a preference for Django.
- Hands-on experience designing normalized database schemas for large-scale applications using technologies such as MySQL, MongoDB, or Elasticsearch.
- Practical knowledge of in-memory data stores like Redis or Memcached.
- Familiarity with working in agile environments and exposure to tools like Jira is highly desirable.
- Proficiency in using version control systems like Git.
- Strong communication skills and the ability to collaborate effectively in team settings.
- Self-motivated with a strong sense of ownership and commitment to delivering results.
- Additional knowledge of RabbitMQ, AWS/Azure services, Docker, MQTT, Lambda functions, Cron jobs, Kibana, and Logstash is an added advantage.
- Knowledge of web servers like Nginx/Apache is considered a valuable asset.
* Requirements is such a strong word. We don’t necessarily expect to find a candidate that has done everything listed, but you should be able to make a credible case that you’ve done most of it and are ready for the challenge of adding some new things to your resume.
Tell me more about Haptik
- On a roll: Announced major strategic partnership with Jio.
- Great team: You will be working with great leaders who have been listed in Business World 40 Under 40, Forbes 30 Under 30 and MIT 35 Under 35 Innovators.
- Great culture: The freedom to think and innovate is something that defines the culture of Haptik. Every person is approachable. While we are working hard, it is also important to take breaks to not get too worked up.
- Huge market: Disrupting a massive, growing chatbot market, projected to grow from US $0.11 bn in 2015 to US $0.94 bn by the end of 2024.
- Great customers: Businesses across industries - Samsung, HDFCLife, Times of India are some that have relied on Haptik's Conversational AI solutions to engage, acquire, service and understand customers.
- Impact: A fun and exciting start-up culture that empowers its people to make a huge impact.
Working hard for things that we don't care about is stress, but working hard for something we love is called passion! At Haptik we passionately solve problems in order to be able to move faster and don't shy away from breaking things!

Position Name : Senior Software Architect
📍 Location : UB City, Bengaluru (Hybrid – 3 days in office)
🕒 Experience : 11 to 18 Years
📅 Notice Period : Immediate to 1 month
👥 Open Positions : 2
Role Overview :
- We are looking for a Senior Software Architect to design, build, and scale high-performance SaaS B2B applications.
- The ideal candidate will have deep expertise in MERN stack (MongoDB, Express.js, React.js, Node.js), AWS, and microservices-based architectures.
- This role requires at least 3 Years of experience in an Architect position, with a strong background in building scalable products and handling daily releases.
Key Responsibilities :
- Architect and develop scalable SaaS B2B products using React, Node.js, GraphQL, Elasticsearch, and Micro Frontend Architecture (MFE).
- Design and implement microservices-based distributed systems and RESTful APIs.
- Optimize frontend interfaces using React, Redux, Next.js, HTML, and CSS.
- Develop robust backend APIs using Node.js, Express.js, and MongoDB/PostgreSQL.
- Utilize AWS services (EC2, S3, SQS, SNS, DocumentDB, OpenSearch) and containerization (Docker, Kubernetes).
- Implement scalable database schemas and ensure optimal performance.
- Work with GraphQL for efficient data querying and manipulation.
- Ensure security, reliability, and high availability of the platform.
- Lead and mentor development teams, conduct code reviews, and enforce best practices.
- Collaborate with cross-functional teams to deliver business-driven software solutions.
Required Skills & Experience :
✅ 3+ Years as a Software Architect and currently in an Architect role.
✅ 5+ Years of experience in full-stack development with the MERN Stack.
✅ Strong knowledge of scalable architectures, microservices, and cloud-native SaaS products.
✅ Experience in AWS deployment, cloud infrastructure, and DevOps.
✅ Hands-on experience with Micro Frontends (MFE).
✅ Experience in handling everyday releases and working in Agile environments.
✅ Strong problem-solving skills, logical thinking, and architectural decision-making.
✅ Bachelor’s or Master’s degree in Computer Science or related field.
Preferred :
- Experience in B2B SaaS product development.
- Background in product-based companies.
- No prior experience in Walmart or similar large enterprises.
- Candidates should be based in Bengaluru (Outstation candidates will not be processed).


Job Role: Senior Full Stack Developer
Location: Trichy
Job Type: Full Time
Experience Required: 5+ Years
Reporting to : Product Head
About Us:
At Zybisys Consulting Services LLP, we are a leading company in Cloud Managed Services and Cloud Computing. We believe in creating a vibrant and inclusive workplace where talented people can grow and succeed. We are looking for a dedicated leader who is passionate about supporting our team, developing talent, and enhancing our company culture.
Role Overview:
Are you a seasoned Full Stack Developer with a passion for crafting innovative solutions? We are looking for an experienced Senior Full Stack Developer to enhance our team and lead the development of innovative solutions.
Key Responsibilities:
- Develop and Maintain Applications: Design, develop, and maintain scalable and efficient full-stack applications using modern technologies.
- Database Design: Expertise in both relational and NoSQL databases, including schema design, query optimization, and data modeling.
- Collaborate with Teams: Work closely with front-end and back-end developers along with the Engineering team to integrate and optimize APIs and services.
- Implement Best Practices: Ensure high-quality code, adherence to best practices, and efficient use of technologies.
- Troubleshoot and Debug: Identify and resolve complex issues, providing solutions and improvements.
- Code Review and Quality Assurance: Skill in reviewing code, ensuring adherence to coding standards, and implementing best practices for software quality.
- Agile Methodologies: Experience with Agile frameworks (e.g., Scrum, Kanban) to facilitate iterative development and continuous improvement.
- Test-Driven Development (TDD): Knowledge of TDD practices, writing unit tests, and integrating automated testing (CI/CD) into the development workflow.
- Technical Documentation: Ability to write clear and concise technical documentation for codebases, APIs, and system architecture.
Technical Skills:
- Backend: Node.js, Express.js, Python, Golang, gRPC
- Frontend: React.js, Next.js, HTML, HTML5, CSS3, jQuery
- Database: MongoDB, MySQL, Redis, OpenSearch
- API : RESTful APIs, SOAP services, or GraphQL
- Tools & Technologies: Docker, Git, Kafka
- Design & Development: Figma, Linux
- Containers & container orchestration: Docker, Kubernetes
- Networking & OS Knowledge
What We Offer:
- Growth Opportunities: Expand your skills and career within a forward-thinking company.
- Collaborative Environment: Join a team that values innovation and teamwork.
If you're ready to take on exciting challenges and work in a collaborative environment, we'd love to hear from you!
Apply now to join our team as a Senior Full Stack Developer and make waves with your skills!


Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.
Engineering at Innovaccer
With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we’re shaping the future and making a meaningful impact on the world.
About the Role
We at Innovaccer are looking for a Software Development Engineer-II (Fullstack) to build the most amazing product experience. You'll get to work with other engineers to build delightful feature experiences that understand and solve our customers' pain points.
A Day in the Life
● Building efficient and reusable applications and abstraction
● Identify and communicate best practices.
● Participate in the project life-cycle from pitch/prototyping through definition and design to build, integration, and delivery
● Analyse and improve the performance, scalability, stability, and security of the product
● Improve engineering standards, tooling, and processes
What You Need
● 2-5 years of experience with a start-up mentality and a high willingness to learn
● Expertise in Python/NodeJS
● Experience working in Web Development Frameworks (Express/Django or Flask)
● Experience working in teams of 3-10 people.
● Knowledge of Relational Databases
Nice to have
● Experience working in FE (JS + React)
● Experience in Cloud (AWS)
● Experience in Terraform
We offer competitive benefits to set you up for success in and outside of work.
Here’s What We Offer
● Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
● Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
● Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
● Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
● Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
● Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices
Where and how we work
Our Noida office is situated in a posh space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team.
Innovaccer is an equal opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.
Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innova. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
About Innovaccer
Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer's EHR-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com.
Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com
A) Skills Required
Essential Skills (two top skills): three possible combinations.
1. Expertise in both ElasticSearch and Kafka (preferred).
2. Expertise in ElasticSearch and willingness to learn Kafka.
3. Expertise in Kafka and willingness to learn ElasticSearch.
B) Other Information
Educational Qualifications: Graduate
Experience: Mid-Level (6+ years)
Minimum Qualifications:
ElasticSearch/OpenSearch
- Software lifecycle/programming skills
- Linux
- Python
- Ingestion tools (Logstash, OpenSearch Ingestion, Fluentd, Fluent Bit, Harness, CloudFormation, containers, images, ECS, Lambda)
- SQL queries
- JSON
- AWS knowledge
Kafka/MSK
- Linux
- In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors
- Understanding of Kafka topic design and creation
- Good knowledge of replication and high availability for Kafka systems
- Good understanding of producers and consumer groups
- Understanding of Kafka partitions and scaling up
- Kafka latency/lag and throughput
- Integrating Kafka Connect with various data sources, internal or external
- Kafka security using SSL/certificates
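A core Kafka concept in the list above, partitions and per-key ordering, comes down to routing each keyed message to a deterministic partition. Real Kafka clients hash keys with murmur2; the sketch below substitutes a stdlib CRC32 purely to illustrate the idea:

```python
import zlib

# Kafka's default partitioner routes a keyed message to
# hash(key) % num_partitions, so every message with the same key lands
# on the same partition, which is what preserves per-key ordering.
# Real clients use murmur2; CRC32 here is only a stand-in.
def partition_for(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions
```

Note that increasing the partition count changes this mapping, which is why scaling up a keyed topic can break existing per-key ordering guarantees.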


Job Title : MERN Stack Developer
Experience : 5+ Years
Shift Timings : 8:00 AM to 5:00 PM
Role Overview:
We are hiring a skilled MERN Stack Developer to build scalable web applications. You’ll work on both front-end and back-end, leveraging modern frameworks and cloud technologies to deliver high-quality solutions.
Key Responsibilities :
- Develop responsive UIs using React, GraphQL, and TypeScript.
- Build back-end APIs with Node.js, Express, and MySQL.
- Integrate AWS services like Lambda, S3, and API Gateway.
- Optimize deployments using AWS CDK and CloudFormation.
- Ensure code quality with Mocha/Chai/Sinon, ESLint, and Prettier.
Required Skills :
- Strong experience with React, Node.js, and GraphQL.
- Proficiency in AWS services and Infrastructure as Code (CDK/Terraform).
- Familiarity with MySQL, Elasticsearch, and modern testing frameworks.

Job description for Python/Backend Developer
We are actively looking for backend software engineers who are passionate about building cutting-edge systems that work on the latest tech stack (Python, Django) but also help save lives. You’ll have the opportunity to learn and lead the development of several AI-enabled products and solutions within the company that are geared to help accelerate the development of new cures and to reduce the inefficiencies in how healthcare information is managed.
Key Responsibilities:
- Design, develop, and deploy scalable APIs using Python/Django.
- Integrate third-party APIs such as the Facebook Page API, Google Business API, and other social APIs (10+)
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write clean, maintainable, and testable code.
- Develop and maintain authentication mechanisms, including OAuth, JWT, and SSO integration with third-party providers.
- Optimize and maintain existing APIs for performance and scalability.
Required Skills and Qualifications:
- Education: Bachelor’s degree in Computer Science, Information Technology, or related field.
- Experience:
- 2+ years of experience in Python development.
- Proven experience in designing and developing RESTful APIs.
- Advanced proficiency in Python programming.
- Strong experience with databases (e.g., MySQL, Elasticsearch).
- Hands-on experience with SSO protocols and implementation (e.g., OAuth, SAML, OpenID Connect).
- Experience integrating SSO with third-party providers.
- Proficiency in using JSON Web Tokens (JWT) for secure data exchange
- Technical Skills:
- Proficiency in Python and Python frameworks (Django, Flask).
- Solid understanding of web technologies (HTTP, SSL/TLS, JSON, XML).
- Familiarity with API documentation tools (e.g., Swagger, Postman).
- Experience with version control systems (e.g., Git).
- Expertise in authentication and authorization methods.
- Ability to write clean, maintainable, and efficient code following best practices.
- Experience in writing unit tests for code to ensure reliability and maintainability
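The JWT proficiency asked for above boils down to the `header.payload.signature` structure. A minimal standard-library sketch of HS256 signing and verification is below; in production you would use a vetted library such as PyJWT rather than rolling your own, and the payload fields are illustrative.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str) -> str:
    """Minimal HS256 JWT: base64url(header).base64url(payload).signature"""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None
    padded = body + "=" * (-len(body) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = make_jwt({"sub": "user-123"}, "s3cret")
```

Note the constant-time `hmac.compare_digest` for signature comparison; a plain `==` would leak timing information.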

We are looking to expand our existing Python team across our offices in Surat. This position is for SDE-1 - Junior Software Engineer.
The requirements are as follows:
1) Familiarity with the Django REST Framework.
2) Experience with the FastAPI framework will be a plus.
3) Strong grasp of basic Python programming concepts (we ask a lot of questions on this in our interviews :) ).
4) Experience with databases like MongoDB, Postgres, Elasticsearch, and Redis will be a plus.
5) Experience with any ML library will be a plus.
6) Familiarity with Git, writing unit test cases for all code written, and CI/CD concepts will be a plus.
7) Familiarity with basic code patterns like MVC.
8) Grasp of basic data structures.
PortOne is re-imagining payments in Korea and other international markets. We are a Series B funded startup backed by prominent VC firms SoftBank and Hanwha Capital.
PortOne provides a unified API for merchants to integrate with and manage all of the payment options available in Korea and SEA Markets - Thailand, Singapore, Indonesia etc. It's currently used by 2000+ companies and processing multi-billion dollars in annualized volume. We are building a team to take this product to international markets, and looking for engineers with a passion for fintech and digital payments.
Culture and Values at PortOne
- You will be joining a team that stands for Making a difference.
- You will be joining a culture that identifies more with Sports Teams rather than a 9 to 5 workplace.
- This will be a remote role that allows you the flexibility to save time on commute
- You will have peers who are/have
- Highly Self Driven with A sense of purpose
- High Energy Levels - Building stuff is your sport
- Ownership - Solve customer problems end to end - Customer is your Boss
- Hunger to learn - Highly motivated to keep developing new tech skill sets
Who you are
* You are an athlete and DevOps/DevSecOps is your sport.
* Your passion drives you to learn and build stuff, not because your manager tells you to.
* Your work ethic is that of an athlete preparing for your next marathon. Your sport drives you and you like being in the zone.
* You are NOT a clock-watcher renting out your time, and do NOT have an attitude of "I will do only what is asked for".
* You enjoy solving problems and delighting users, both internally and externally.
* You take pride in working on projects to successful completion involving a wide variety of technologies and systems.
* You possess strong and effective communication skills and the ability to present complex ideas in a clear and concise way.
* You are responsible, self-directed, a forward thinker, and operate with focus, discipline, and minimal supervision.
* You are a team player with a strong work ethic.
Experience
* 2+ years of experience working as a DevOps/DevSecOps Engineer
* BE in Computer Science or an equivalent combination of technical education and work experience
* Must have actively managed infrastructure components & DevOps for high-quality, high-scale products
* Proficient knowledge of and experience with infra concepts - networking, load balancing, high availability
* Experience designing and configuring infra on cloud service providers - AWS / GCP / Azure
* Knowledge of secure infrastructure practices and designs
* Experience with DevOps, DevSecOps, Release Engineering, and Automation
* Experience with Agile development incorporating TDD / CI / CD practices
Hands on Skills
* Proficient in at least one high-level programming language: Go / Java / C
* Proficient in scripting - e.g. bash - to build/glue together DevOps/data-pipeline workflows
* Proficient in cloud services - AWS / GCP / Azure
* Hands-on experience with CI/CD & relevant tools - Jenkins / Travis / GitOps / SonarQube / JUnit / mock frameworks
* Hands-on experience with the Kubernetes ecosystem & container-based deployments - Kubernetes / Docker / Helm Charts / Vault / Packer / Istio / Flyway
* Hands-on experience with Infra-as-Code frameworks - Terraform / Crossplane / Ansible
* Version control & code quality: Git / GitHub / Bitbucket / SonarQube
* Experience with monitoring tools: Elasticsearch / Logstash / Kibana / Prometheus / Grafana / Datadog / Nagios
* Experience with RDBMS databases & caching services: Postgres / MySQL / Redis / CDN
* Experience with data pipeline/workflow tools: Airflow / Kafka / Flink / Pub-Sub
* DevSecOps - cloud security assessment, best practices & automation
* DevSecOps - vulnerability assessments/penetration testing for web, network, and mobile applications
* Preferably DevOps/infra experience for products in the Payments/FinTech domain - payment gateways/bank integrations etc.
What will you do?
Devops
* Provision the infrastructure using Crossplane/Terraform/CloudFormation scripts.
* Create and manage AWS services - EC2, RDS, EKS, S3, VPC, KMS, and IAM - including EKS clusters & RDS databases.
* Monitor the infra to prevent outages/downtimes and honor our infra SLAs
* Deploy and manage new infra components.
* Update and migrate the clusters and services.
* Reduce cloud cost by scheduling less-utilized instances.
* Collaborate with stakeholders across the organization such as experts in - product, design, engineering
* Uphold best practices in Devops/DevSecOps and Infra management with attention to security best practices
DevSecOps
* Cloud Security Assessment & Automation
* Modify existing infra to adhere to security best practices
* Perform Threat Modelling of Web/Mobile applications
* Integrate security testing tools (SAST, DAST) into CI/CD pipelines
* Incident management and remediation - Monitoring security incidents, recovery from and remediation of the issues
* Perform frequent Vulnerability Assessments/Penetration Testing for Web, Network and Mobile applications
* Ensure the environment is compliant with CIS, NIST, PCI, etc.
Here are examples of apps/features you will be supporting as a Devops/DevSecOps Engineer
* Intuitive, easy-to-use APIs for payment process.
* Integrations with local payment gateways in international markets.
* Dashboard to manage gateways and transactions.
* Analytics platform to provide insights
Job Title: ELK Stack Engineer
Position Overview:
We are seeking a skilled and experienced Senior Elasticsearch Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines using the ELK Stack (Elasticsearch 7.1 or higher, Logstash, Kibana). The Senior Elasticsearch Developer will play a key role in managing large datasets, optimizing complex queries, and maintaining self-hosted Elasticsearch clusters.
Responsibilities:
- Design, develop, and maintain data pipelines using the ELK Stack.
- Handle large datasets (3 billion+ records, 100+ fields per index) with efficiency and accuracy.
- Develop and optimize complex Elasticsearch queries for efficient data retrieval and analysis.
- Manage and maintain self-hosted Elasticsearch clusters.
- Implement DevOps practices, including containerization with Docker.
- Perform backups, disaster recovery, and index migrations to ensure data security and integrity.
- Execute data processing and cleaning techniques, including writing custom scripts to extract and transform data into usable formats.
- Collaborate with the engineering team to integrate Elasticsearch with other systems.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field.
- Minimum 5+ years of experience in designing, developing, and maintaining data pipelines using the ELK Stack.
- Hands-on experience handling large datasets (3 million+ records, 50+ fields per index).
- Strong proficiency in developing and optimizing complex Elasticsearch queries.
- Experience managing and maintaining self-hosted Elasticsearch clusters.
- Proficiency in DevOps practices, including containerization with Docker.
- Knowledge of security best practices, including backups, disaster recovery, and index migrations.
- Experience with database administration and data processing techniques.
- Excellent communication and teamwork skills.
Benefits:
- Competitive salary and benefits package between 20 and 35 LPA.
- Opportunities for professional development and growth.
- Collaborative and inclusive work environment.
- Flexible work hours and remote work options.

Job Description
We are looking for a passionate Search Specialist Backend Engineer to join our team. This role will focus on improving and optimizing our search capabilities to enhance user experience, scalability, and relevancy.
Location - Bangalore
Designation - Senior Software engineer - Search Specialist
Responsibilities:
● Design, develop, and maintain the search application, ensuring performance, and scalability.
● Collaborate with cross-functional teams to define and implement search features and improvements.
● Ensure search results are relevant by employing techniques like ranking, personalization, and recommendation.
● Work on complex problems related to search algorithms, data structures, and distributed systems.
● Implement logging, metrics, and monitoring for search services.
● Optimize search by tuning the underlying algorithms, experimenting with new techniques, and leveraging tools like Elasticsearch, Solr, etc.
● Maintain and improve existing search functionalities while ensuring backward compatibility.
● Stay updated with the latest advancements in search technology and industry best practices.
Basic Qualifications:
● Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
● Experience with search engines like Elasticsearch, Solr, or similar technologies.
● Solid understanding of algorithms, data structures, and distributed systems.
● Proficiency in Python and Django.
● Familiarity with RESTful APIs and backend services.
Preferred Qualifications:
● Experience with natural language processing (NLP) or machine learning as applied to search.
● Knowledge of various search relevance techniques and ranking algorithms.
● Experience in a cloud environment (e.g., AWS, Google Cloud, Azure).
● Familiarity with containerization technologies such as Docker and Kubernetes.
● Strong analytical and debugging skills.
Personal Attributes:
● Strong communication skills and ability to collaborate effectively in a team setting.
● A keen interest in improving user experience through search.
● Proactive, self-motivated, and able to work in a fast-paced environment.
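The relevance and ranking work described above ultimately rests on term-weighting ideas like TF-IDF. The toy scorer below illustrates the intuition (rare terms weigh more, repeated terms add up); note that Elasticsearch and Solr actually default to BM25, a refinement of this, so this is a teaching sketch rather than what those engines run.

```python
import math
from collections import Counter

def tf_idf_scores(query: str, docs: list) -> list:
    """Toy TF-IDF ranking: score each document against the query terms.
    Rare terms get a higher idf weight; repeated occurrences add up."""
    terms = query.lower().split()
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    scores = []
    for tokens in tokenized:
        counts = Counter(tokens)
        score = 0.0
        for t in terms:
            df = sum(1 for d in tokenized if t in d)  # document frequency
            if df:
                idf = math.log(1 + n / df)  # rarer term -> larger weight
                score += counts[t] * idf    # term frequency * idf
        scores.append(score)
    return scores

docs = [
    "kafka consumer lag",
    "elasticsearch query tuning",
    "elasticsearch kafka pipeline",
]
scores = tf_idf_scores("elasticsearch tuning", docs)  # doc 1 ranks highest
```

Production relevance tuning then layers personalization, recency boosts, and learned ranking on top of a base score like this.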


DocNexus is revolutionizing the global medical affairs & commercial ecosystem with search. We provide a next-generation data platform that simplifies searching through millions of insights, publications, clinical trials, payments, and social media data within seconds to identify healthcare professionals (HCPs), products, manufacturers, and healthcare systems. Leveraging AI-powered Knowledge Graphs, DocNexus assists life science organizations in finding the right key opinion leaders (KOL/DOLs) who play a crucial role in developing and bringing life-saving pharmaceutical products and medical devices to market. Backed by industry leaders such as Techstars, JP Morgan, Mass Challenge, and recognized as one of the Top 200 Most Innovative Startups by TechCrunch Disrupt, we are committed to transforming healthcare insights. We are seeking a skilled and passionate DevOps Engineer to join our dynamic team and contribute to the efficient development, deployment, and maintenance of our platform.
We are looking for a visionary Sr. Full Stack Engineering Lead who is passionate about building and leading our technology department. The ideal candidate will have a solid technical background and experience in leading a team to drive innovation and growth. As Engineering Lead, you will oversee the development and dissemination of technology for external customers, vendors, and other clients to help improve and increase business.
Leadership and Strategy:
- Lead the engineering team and make strategic decisions regarding the technology stack, project management, and resource allocation.
- Establish the company’s technical vision and lead all aspects of technological development.
Development:
- Develop and maintain the front-end and back-end of web applications.
- Ensure the performance, quality, and responsiveness of applications.
- Collaborate with a team to define, design, and ship new features.
Maintenance and Optimization:
- Maintain code integrity and organization.
- Identify and correct bottlenecks and fix bugs.
- Continually work on optimizing the performance of different applications.
Security: Ensure the security of the web applications by integrating security best practices.
- Regularly update the system to protect against vulnerabilities.
Innovation:
- Research and implement new technologies and frameworks that can improve the performance and user experience of the platform.
- Stay informed on emerging technologies and trends that can potentially impact the company's products and services.
Collaboration and Communication:
- Work closely with other departments to understand their needs and translate them into technical solutions.
- Communicate technology strategy to partners, management, investors, and employees.
Project Management:
- Oversee and support project planning, deadlines, and progress.
- Ensure that the technology standards and best practices are maintained across the organization.
Mentoring and Team Building:
- Foster a culture of innovation and excellence within the technology team.
- Mentor and guide the professional and technical development of team members.
Front-End Development:
- HTML/CSS: For structuring and styling the web pages.
- JavaScript/TypeScript: Core scripting language, along with frameworks like Angular, React, or Vue.js for dynamic and responsive user interfaces.
Back-End Development:
- Python: Using frameworks like Django or Flask for server-side logic.
- Node.js: JavaScript runtime environment for building scalable network applications.
- Ruby on Rails: A server-side web application framework written in Ruby.
Database Management:
- SQL Databases: MySQL, PostgreSQL for structured data storage.
- NoSQL Databases: MongoDB, Cassandra for unstructured data or specific use cases.
Server Management:
- Nginx or Apache: For server and reverse proxy functionalities.
- Docker: For containerizing applications and ensuring consistency across multiple development and release cycles.
- Kubernetes: For automating deployment, scaling, and operations of application containers.
DevOps and Continuous Integration/Continuous Deployment (CI/CD):
- Git: For version control.
- Jenkins, Travis CI, or CircleCI: For continuous integration and deployment.
- Ansible, Chef, or Puppet: For configuration management.
Cloud Services:
- AWS: For various cloud services like computing, database storage, content delivery, etc.
- Serverless Frameworks: Such as AWS Lambda or Google Cloud Functions for running code without provisioning or managing servers.
Security:
- OAuth, JWT: For secure authentication mechanisms.
- SSL/TLS: For secure data transmission.
- Various Encryption Techniques: To safeguard sensitive data.
Performance Monitoring and Testing:
- Selenium, Jest, or Mocha: For automated testing.
- New Relic or Datadog: For performance monitoring.
Data Science and Analytics:
- Python Libraries: NumPy, Pandas, or SciPy for data manipulation and analysis.
- Machine Learning Frameworks: TensorFlow, PyTorch for implementing machine learning models.
Other Technologies:
- GraphQL: For querying and manipulating data efficiently.
- WebSockets: For real-time bi-directional communication between web clients and servers.
Job title : Backend Engineer
Location : Remote
Job Description
- Responsibilities:
- Execute on the software development strategy to improve our dynamic highly distributed system
- Understand and implement software development/engineering lifecycle concepts to drive features from conception to delivery
- Collaborate closely with the product management, architects and dev-ops to achieve quality releases
- Work hand-in-hand with customer support, documentation and downstream teams to enable customer success
- Make appropriate trade-offs to optimize time-to-release while maintaining performance and scalability requirements
- Be able to clearly communicate goals and desired outcomes to internal project teams
- Interview, mentor and coach new team members
Skills :
Requirements:
- BS/MS in Computer Science/Engineering with 8 years or equivalent experience
- Be a self-starter, able to learn independently and adapt quickly
- Advanced-level of experience as hands-on Core Java Software Engineer in a distributed/cloud-based product
- Solid experience with Spring framework, Rest API, MongoDB, ElasticSearch, Kubernetes and Docker
- Cloud Experience (AWS, Google Cloud, Azure)
- Strong experience and knowledge of microservices, distributed processing systems, and performance optimization
- Experience with Agile development process and embrace Agile methodologies
- Strong believer in automation testing, striving for higher code coverage
- Can-do attitude on problem-solving, quality and ability to execute
Education
- BS/MS in Computer Science/Engineering with 5 years or equivalent experience


EXPERTISE AND QUALIFICATIONS
- 14+ years of experience in Software Engineering with at least 6+ years as a Lead Enterprise Architect preferably in a software product company
- High technical credibility - ability to lead technical brainstorming, take decisions and push for the best solution to a problem
- Experience in architecting Microservices based E2E Enterprise Applications
- Experience in UI technologies such as Angular, Node.js or Fullstack technology is desirable
- Experience with NoSQL technologies (MongoDB, Neo4j etc.)
- Elasticsearch, Logstash, Kibana (the ELK stack).
- Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr, etc.
- Exposure to SaaS cloud-based platforms.
- Experience on Docker, Kubernetes etc.
- Experience in planning, designing, developing and delivering Enterprise Software using Agile Methodology
- Key Programming Skills: Java, J2EE with cutting edge technologies
- Hands-on technical leadership with proven ability to recruit and mentor high performance talents including Architects, Technical Leads, Developers
- Excellent team building, mentoring and coaching skills are a must-have
- A proven track record of consistently setting and achieving high standards
Five Reasons Why You Should Join Zycus
1. Cloud Product Company: We are a Cloud SaaS Company, and our products are created by using the latest technologies like ML and AI. Our UI is in Angular JS and we are developing our mobile apps using React.
2. A Market Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites.
3. Move between Roles: We believe that change leads to growth and therefore we allow our employees to shift careers and move to different roles and functions within the organization
4. Get a Global Exposure: You get to work and deal with our global customers.
5. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.
About Us
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users.
Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization.
Start your #CognitiveProcurement journey with us, as you are #MeantforMore
About the Role
As a result of our rapid growth, we are looking for a Java Backend Engineer to join our existing Cloud Engineering team and take the lead in the design and development of several key initiatives of our existing Miko3 product line as well as our new product development initiatives.
Responsibilities
- Designing, developing and maintaining core system features, services and engines
- Collaborating with a cross-functional team of backend, mobile application, AI, signal processing, and robotics engineers, plus the Design, Content, and Linguistics teams, to realize the requirements of a conversational social robotics platform; this includes investigating design approaches, prototyping new technology, and evaluating technical feasibility
- Ensure the developed backend infrastructure is optimized for scale and responsiveness
- Ensure best practices in design, development, security, monitoring, logging, and DevOps are adhered to throughout the project.
- Introducing new ideas, products, features by keeping track of the latest developments and industry trends
- Operating in an Agile/Scrum environment to deliver high quality software against aggressive schedules
Requirements
- Proficiency in distributed application development lifecycle (concepts of authentication/authorization, security, session management, load balancing, API gateway), programming techniques and tools (application of tested, proven development paradigms)
- Proficiency in working on Linux based Operating system.
- Proficiency in at least one server-side programming language like Java. Additional languages like Python and PHP are a plus
- Proficiency in at least one server-side framework like Servlets, Spring, or Spark Java.
- Proficient in using ORM/data access frameworks like Hibernate or JPA with Spring or other server-side frameworks.
- Proficiency in at least one data serialization framework: Apache Thrift, Google Protocol Buffers, Apache Avro, Gson, Jackson, etc.
- Proficiency in at least one inter-process communication framework: WebSockets, RPC, message queues, custom HTTP libraries/frameworks (KryoNet, RxJava), etc.
- Proficiency in multithreaded programming and Concurrency concepts (Threads, Thread Pools, Futures, asynchronous programming).
- Experience defining system architectures and exploring technical feasibility tradeoffs (architecture, design patterns, reliability and scaling)
- Experience developing cloud software services and an understanding of design for scalability, performance and reliability
- Good understanding of networking and communication protocols, and proficiency in identifying CPU, memory & I/O bottlenecks and solving read- and write-heavy workloads.
- Proficiency in concepts of monolithic and microservice architectural paradigms.
- Proficiency in working on at least one of cloud hosting platforms like Amazon AWS, Google Cloud, Azure etc.
- Proficiency in at least one of database SQL, NO-SQL, Graph databases like MySQL, MongoDB, Orientdb
- Proficiency in at least one of testing frameworks or tools JMeter, Locusts, Taurus
- Proficiency in at least one RPC communication framework: Apache Thrift, GRPC is an added plus
- Proficiency in asynchronous libraries (RxJava), frameworks (Akka),Play,Vertx is an added plus
- Proficiency in functional programming ( Scala ) languages is an added plus
- Proficiency in working with NoSQL/graph databases is an added plus
- Proficient understanding of code versioning tools, such as Git is an added plus
- Working knowledge of tools for server and application metrics logging and monitoring (Monit, ELK, Graylog) is an added plus
- Working knowledge of DevOps configuration management utilities like Ansible, Salt, Puppet is an added plus
- Working knowledge of DevOps containerization technologies like Docker, LXD is an added plus
- Working Knowledge of container orchestration platform like Kubernetes is an added plus



About us: EITACIES Inc is a Product Development and IT Services company, providing pioneering services in Digital Transformation, Cloud & Cyber Security, DevSecOps, AI & ML, Business Intelligence, and Enterprise Integration. We have been supporting multiple Bay Area start-ups and Fortune 500 companies in different industry verticals since 2008. For more information please visit www.eitacies.com
We are looking for an Automation Software Engineer to join the core team that is building our latest cloud security product - Prisma SaaS. This fast-growing cloud service provides next-generation security for enterprise SaaS applications such as Box, Dropbox, GitHub, Google Apps, Slack, Salesforce and many more. Prisma SaaS enables organizations to store terabytes of sensitive data in these applications while preventing any security threats to their cloud. This role will also give you a unique opportunity to collaborate with many of our supported SaaS vendors and build skills to influence every aspect of delivering an enterprise-class cloud security service.
Bring your backend Java cloud engineering skills to work on the latest cloud software/web applications. Help us deploy and scale the next generation of cloud security utilizing big data. Keys to success: experience building microservices, REST APIs, BigQuery, Elasticsearch, and AWS SQS.
With a great fit having:
● Strong expertise with React
● Strong expertise with RSpec
● Strong experience with Python
● Experience in End to End Testing automation for UI
● Experience in Unit/Integration testing of APIs
● Strong expertise with MongoDB and ElasticSearch
● Working knowledge of Docker
● Strong experience with open source tooling
● Bachelor’s Degree and/or equivalent relevant experience in a technical field
Your Impact:
- Responsible for complete software development process including End-to-End Automation for UI
- Write clean, testable, readable, and maintainable code that scales and performs well for thousands of customers.
- Participate actively and contribute to design and development discussions.
- Develop proven understanding and be able to explain advanced Cloud Computing and Cloud Security concepts to others
Your Experience:
● Strong expertise in latest UI technology
● Strong Experience in Python
● Collaborate with UX/UI designers and product designers to build user-friendly, immersive, reactive applications
● 5+ years of software data integration experience.
● 5+ years of JavaScript, HTML, and CSS.
● 5+ years of working knowledge of React, Angular, or an equivalent MVC framework.
● 5+ years of experience with the API toolset: REST, HTTP, GraphQL, JSON, XML, Postman, etc.
● Experience in SQL, MongoDB, ElasticSearch.
● Experience with Git, Continuous Integration, and Continuous Delivery mechanisms
● Experience with RSpec.
● Big Plus if you have CASB or general SaaS application experience.
● Big plus if you have experience with Data Security application.
The notice period is 30 days or less.


Requirements
Experience
- 5+ years of professional experience in implementing MLOps framework to scale up ML in production.
- Hands-on experience with Kubernetes, Kubeflow, MLflow, Sagemaker, and other ML model experiment management tools including training, inference, and evaluation.
- Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton inference server, etc.)
- Proficiency with ML model training frameworks (PyTorch, PyTorch Lightning, TensorFlow, etc.).
- Experience with GPU computing to do data and model training parallelism.
- Solid software engineering skills in developing systems for production.
- Strong expertise in Python.
- Building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
- Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, ElasticSearch, Presto, SQL, etc.).
- Having Geospatial / Remote sensing experience is a plus.



- Understand fundamental design principles and best practices for developing backend servers and web applications. Gather requirements, scope functionality, and estimate and translate those requirements into solutions.
- Implement and integrate software features as per requirements.
- Deliver across the entire app life cycle.
- Work in a product creation project and/or technology project with implementation or integration responsibilities. Improve an existing code base where required, and be able to read source code to understand data flow and origin.
- Design effective data storage for the task at hand and know how to optimize query performance along the way.
- Follow an agile methodology of development and delivery
- Strictly adhere to coding standards and internal practices; must be able to conduct code reviews. Mentor and possibly lead junior developers.
- Contribute towards innovation and performance optimization of apps.
- Explain technologies and solutions to technical and non-technical stakeholders
- Diagnose bugs and other issues in products
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
Must have / Good to have:
- 4+ years of experience with Core Python development.
- Design and implementation of high-availability, performant applications in a Unix environment.
- Good with multithreading and data structures
- Develop back-end components to improve responsiveness and overall performance
- Familiarity with database design, integration with applications, and Python packaging.
- Familiarity with front-end technologies (such as JavaScript and HTML5), REST APIs, and security considerations.
- Familiarity with functional testing and deployment automation frameworks
- Experience developing 3-4 production-ready applications in Python.
- Experience in writing unit test cases including positive and negative test cases
- Experience with CI/CD pipeline code deployment (Git, SVN, Jenkins, or TeamCity).
- Experience with Agile and DevOps methodology
- Very good problem-solving skills
- Experience with Web technologies is a plus
- Experience with ELK stack is a plus.
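The multithreading requirement above is classically demonstrated with a producer-consumer pipeline built on `queue.Queue`, whose internal locking makes it safe to share between threads. A minimal sketch under invented names — a bounded queue for backpressure and a `None` sentinel for shutdown:

```python
import threading
import queue

def producer(q, items):
    for item in items:
        q.put(item)       # Blocks when the queue is full (backpressure).
    q.put(None)           # Sentinel: tells the consumer to stop.

def consumer(q, results):
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * item)  # Stand-in for expensive work.

q = queue.Queue(maxsize=4)  # Bounded queue throttles a fast producer.
results = []
t1 = threading.Thread(target=producer, args=(q, range(10)))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
```

With multiple consumers, one sentinel per consumer (or `queue.shutdown()` on Python 3.13+) is the usual shutdown pattern.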


Full Stack Developer - Responsibilities:
- Perform design, development and support of existing and new products.
- Understand product vision and business needs to define product requirements and product architectural solutions.
- Develop architectural and design principles to improve performance, capacity, and scalability of product.
- Work with Product Manager in planning and execution of new product releases.
- Consult business management team to clarify objectives and functional requirements for new or modified products.
- Work with domain, product management and product engineering teams in the solution engineering efforts.
- Work with pre-sales and product management teams in solution demonstrations.
- Maintain product roadmap and architectural standards that assure product development projects optimally align with business objectives.
- Define product validation policies to deliver products that meet system and project expectations.
- Facilitate the creation, review, and sign off of project deliverables.
- Provide support for production escalations and problem resolution for customers.
- Assist technical team with issues needing technical expertise or complex systems knowledge.
- Develop broad knowledge about current and future product features.
- Analyze market segments and customer base to develop market solutions.
- Define product requirements that address market opportunities.
Skills:
Spring, Spring Microservices, Spring Data, Spring Boot & Hibernate, Java, JPA, Oracle Certified Professional (desirable), Java SE 8, SQL, AWS, JavaScript, Angular, Elasticsearch
· Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.
· BlueYonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders who build and manage BlueYonder's technology assets in the Data Platform and Services.
· This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor other people in the team on architecture and design in a hands-on manner. You are responsible for the technical direction of the Data Platform. This position reports to the Global Head, Data Services and will be based in Bangalore, India.
· Core responsibilities include architecting and designing (along with counterparts and distinguished Architects) a ground-up, cloud-native (we use Azure) SaaS product in order management and micro-fulfillment.
· The team currently comprises 60+ global associates across the US, India (COE), and the UK, and is expected to grow rapidly. The incumbent will need leadership qualities to mentor junior and mid-level software associates on the team. This person will lead the Data Platform architecture – streaming and bulk, with Snowflake/Elasticsearch/other tools.
Our current technical environment:
· Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake
· Application Architecture: scalable, resilient, event-driven, secure multi-tenant microservices architecture
· Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
· Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite

As an SSE, you will play a crucial role in designing, developing, and maintaining the backend systems that power our platform. The ideal candidate will have a solid background in Python Django and a strong understanding of databases, caching, and distributed systems.
● Design, develop, and maintain robust, scalable, and high-performance backend systems using Python Django.
● Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
● Optimize application performance and scalability by implementing caching strategies, load balancing, and distributed computing techniques.
● Ensure data integrity and security by implementing best practices for data storage, retrieval, and access control.
● Develop and maintain integrations with external APIs and services to support seamless interactions with third-party systems.
● Identify and address performance bottlenecks and other system issues to improve overall system efficiency.
● Write clean, maintainable, and testable code following industry-standard coding practices.
● Conduct code reviews and provide constructive feedback to peers to ensure code quality and adherence to best practices.
● Mentor junior team members and assist in their professional growth.
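The caching-strategy point above can be sketched without Django: a small TTL (time-to-live) decorator in plain Python captures the idea that Django's cache framework (`django.core.cache`) provides in production — serve repeated reads from memory and recompute only after the entry expires. The names and the short TTL here are illustrative only.

```python
import time
import functools

def ttl_cache(ttl_seconds=60):
    """Cache a function's results, expiring each entry after ttl_seconds."""
    def decorator(fn):
        store = {}  # key -> (expiry_timestamp, value)
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]          # Fresh entry: serve from cache.
            value = fn(*args)          # Miss or stale: recompute.
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=0.05)
def expensive_lookup(user_id):
    global calls
    calls += 1          # Count how often the underlying work actually runs.
    return {"id": user_id}

expensive_lookup(1)
expensive_lookup(1)     # Within the TTL: served from cache, no second call.
time.sleep(0.06)
expensive_lookup(1)     # Entry expired: recomputed.
```

In a real Django deployment the `store` dict would be replaced by a shared backend such as Redis or Memcached so all application processes see the same cache.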

As a back-end developer, you’ll work on our back-end app that is built with Ruby on Rails. You’ll help to plan and develop new features, review other people’s code, fix bugs, help in technological and business decisions, and support the internal team.
Responsibilities
- Work in our main app back end built with Ruby on Rails
- Act as an architect of new features
- Review other people's code
- Fix bugs
- Help in decisions of technology and business
- Support internal team
- Refactor code
Requirements
- At least 3 years of industry experience
- At least 1 year of experience working on Ruby on Rails projects
- Comfortable in dealing with server infrastructure and deployment processes
- Experience with continuous delivery and automated testing.
- Experience with refactoring Ruby on Rails applications
- Experience in designing, developing and maintaining APIs
- Can work for at least a couple of hours during Japanese business hours
Nice to haves
These aren’t required, but be sure to mention them in your application if you have them.
- UI/UX design principles
- Good understanding of coding best practices and design patterns
- Understanding of security technologies (encryption, authentication, OAuth 2.0)
- Contribution to open source projects
- Interest and ability to learn other coding languages as needed
- Agile Development experience
Senior Software Engineer (Java)
We are looking for a Senior Software Engineer - Java
If you're a creative problem solver who is eager to develop amazing products and hungry for a new adventure, a world-class workplace is waiting for you in the heart of Kolkata.
An exhaustive list of expectations :
✓ Design and implement cutting-edge applications
✓ Participate in code reviews and application debugging and diagnosis.
✓ Practice modern software development methodologies such as continuous delivery, and scrum.
✓ Collaborate with product managers and engineers to build scalable systems enabling innovative ordering experiences.
Requirements -
✓ Knowledge and 5+ years of experience in developing Java applications
✓ A completed technical degree in Computer Science or any related fields.
✓ Profound knowledge and working experience with Java frameworks (Spring Boot) and SQL databases.
✓ Solid experience in the design and implementation of Restful APIs and design patterns.
✓ Strong knowledge of Core Java, REST, the Spring Framework, Spring Boot microservices, JPA (e.g. Hibernate, OpenJPA, etc.), Docker, Jenkins, and the ELK Stack
✓ Experience working with NoSQL Technologies and interest in Elasticsearch, and Microservices architectures.
✓ Curiosity, out of box problem-solving abilities, and an eye for detail.
✓ Passion for clean code
What really makes us interested in you - You are self-motivated. You think like an entrepreneur, constantly innovating and driving positive change, but more importantly, you consistently deliver stupendous results.
Number of positions – 3
Job Location – Kolkata, Sector 5
About Quantela
We are a technology company that offers outcomes business models. We empower our customers with the right digital infrastructure to deliver greater economic, social, and environmental outcomes for their constituents.
When the company was founded in 2015, we specialized in smart cities technology alone. Today, working with cities and towns; utilities, and public venues, our team of 280+ experts offer a vast array of outcomes business models through technologies like digital advertising, smart lighting, smart traffic, and digitized citizen services.
We pride ourselves on our agility, innovation, and passion to use technology for a higher purpose. Unlike other technology companies, we tailor our offerings (what we can digitize) and the business model (how we partner with our customers to deliver that digitization) to drive measurable impact where our customers need it most. Over the last several months alone, we have helped customers deliver outcomes like improved medical response times to save lives, reduced traffic congestion to keep cities moving, and new revenue streams to tackle societal issues like homelessness.
We are headquartered in Billerica, Massachusetts in the United States with offices across Europe, and Asia.
The company has been recognized with the World Economic Forum’s ‘Technology Pioneers’ award in 2019 and CRN’s IoT Innovation Award in 2020.
For latest news and updates please visit us at www.quantela.com
Overview of the Role
Collaborate with cross-functional teams to define, design, and build performant modern web applications and services. Build high-quality web applications and services by writing clean and modular code.
Must have Skills
- 10+ years of experience leading teams and delivering products.
- Write unit and integration tests to ensure the robustness and reliability of web applications and services.
- Measure and improve the performance of microservices.
- Catalyze growth within the team through code reviews and pair programming to maintain high development standards.
- Investigate operational issues to find the root cause and propose improvements.
- Design, build, and maintain APIs, services, and systems across engineering teams.
- Expert level of experience in design and development of Web Applications, highly scalable distributed systems.
- Should have development experience with the latest technologies, including NodeJS, microservices, Elasticsearch, TimescaleDB, Kafka, Redis, etc.
- Should have exposure in NoSQL and SQL development.
- Comprehensive knowledge of physical and logical data modelling, performance tuning.
- Should possess excellent communication, presentation, and interpersonal skills.
- Ability to work collaboratively and productively with globally dispersed teams
- Build high performance teams and Coach team for successful career growth
- Experience working with relational and non-relational databases, query optimization, and designing schema
Desired Background
Bachelors/Masters degree in Computer Science or Computer Applications
Requirements
• Extensive and expert programming experience in at least one general programming language (e.g. Java, C, C++) & tech stack to write maintainable, scalable, unit-tested code.
• Experience with multi-threading and concurrency programming.
• Extensive experience in object-oriented design skills, knowledge of design patterns, and a huge passion and ability to design intuitive modules and class-level interfaces.
• Excellent coding skills - should be able to convert design into code fluently.
• Knowledge of Test Driven Development.
• Good understanding of databases (e.g. MySQL) and NoSQL (e.g. HBase, Elasticsearch, Aerospike, etc.).
• Strong desire to solve complex and interesting real-world problems.
• Experience with full life cycle development in any programming language on a Linux platform.
• Go-getter attitude that reflects in the energy and intent behind assigned tasks.
• Worked in a startup-like environment with high levels of ownership and commitment.
• BTech, MTech, or Ph.D. in Computer Science or a related technical discipline (or equivalent).
• Experience in building highly scalable business applications, which involve implementing large, complex business flows and dealing with huge amounts of data.
• 3+ years of experience in the art of writing code and solving problems on a large scale.
• Open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
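The Test Driven Development item above follows a red-green cycle: write a failing test first, then the minimal code that makes it pass. A hedged sketch using the standard `unittest` module — the `slugify` function under test is invented purely for illustration:

```python
import unittest

def slugify(title):
    # Minimal implementation written to make the tests below pass:
    # lowercase the title, split on whitespace, rejoin with hyphens.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # In TDD these tests are written (and fail) before slugify exists.
    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello Big  World"), "hello-big-world")

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The discipline is in the ordering, not the tooling: each new behavior starts as a failing test, and refactoring happens only once the suite is green.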
Responsibilities:
- Responsible for design and development of high end, robust, scalable products that disrupt the market
- Technically-intense role with primary focus on building cool products in a niche domain
- Evolution into Principal Software Engineer and Team Lead roles and beyond, based on one's ability to demonstrate very strong technical expertise and project & people management skills.
- Reviewing code work for accuracy and functionality
- Analyzing code segments regularly.
Requirements:
Experience Range: 3+
Role: Software Engineer – final role will depend on candidate’s experience and credentials
Education: BE/B.Tech/MCA/M.Sc./M.Tech
Technology Stack: Java, Hibernate, Spring, Spring Boot, Microservices, MySQL, ReactJS, Elasticsearch, AWS Infra
Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.
• 4+ years of work experience in test planning and automation of enterprise software
• Expertise in programming using Java or Python and other scripting languages.
• Experience with one or more public clouds is expected.
• Comfortable with build processes, CI processes, and managing QA Environments as well as working with build management tools like Git, and Jenkins
• Experience with performance and scalability testing tools.
• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.
Familiarity with how systems such as Elasticsearch, Mongo, Kafka, Hive, Redis, and AWS interact with an application.
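Search systems like Elasticsearch, mentioned above, are built on an inverted index: a map from each term to the set of documents containing it, so a query touches only the postings for its terms instead of scanning every document. A toy sketch of the data structure — this is the underlying idea, not Elasticsearch's API:

```python
from collections import defaultdict

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of document ids

    def add(self, doc_id, text):
        # Naive tokenization: lowercase and split on whitespace.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        # AND semantics: a document must contain every query term.
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]  # Intersect postings lists.
        return result

# Usage with invented sample documents:
idx = InvertedIndex()
idx.add(1, "Kafka streams logs to Elasticsearch")
idx.add(2, "Redis caches hot keys")
idx.add(3, "Elasticsearch indexes logs")
hits = idx.search("elasticsearch logs")
```

Real engines add analyzers (stemming, stop words), relevance scoring such as BM25, and sharding on top of this same core structure.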
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years overall and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
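Configuration-management tools like Ansible, Chef, and Puppet in the list above share one core property: idempotency — applying the same change twice leaves the system in the same state and reports "no change" the second time. A minimal sketch of that idea in Python; the `ensure_line` helper and the config path are invented for illustration:

```python
import os
import tempfile

def ensure_line(path, line):
    """Append `line` to the file only if it is not already present.

    Returns True if the file was changed (Ansible-style 'changed' result).
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # Already converged: do nothing.
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Usage: the second run reports no change -- that is idempotency.
path = os.path.join(tempfile.mkdtemp(), "sshd_config")
first = ensure_line(path, "PermitRootLogin no")
second = ensure_line(path, "PermitRootLogin no")
```

This mirrors what a module like Ansible's `lineinfile` does: describe the desired state, let the tool decide whether any action is needed.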


Requirements
- Extensive and expert programming experience in at least one general programming language (e.g. Java, C, C++) & tech stack to write maintainable, scalable, unit-tested code.
- Experience with multi-threading and concurrency programming.
- Extensive experience in object-oriented design skills, knowledge of design patterns, and a huge passion and ability to design intuitive modules and class-level interfaces.
- Excellent coding skills - should be able to convert the design into code fluently.
- Knowledge of Test Driven Development.
- Good understanding of databases (e.g. MySQL) and NoSQL (e.g. HBase, Elasticsearch, Aerospike, etc.).
- Strong desire to solve complex and interesting real-world problems.
- Experience with full life cycle development in any programming language on a Linux platform.
- Go-getter attitude that reflects in the energy and intent behind assigned tasks.
- Worked in a startup-like environment with high levels of ownership and commitment.
- BTech, MTech, or Ph.D. in Computer Science or a related technical discipline (or equivalent).
- Experience in building highly scalable business applications, which involve implementing large complex business flows and dealing with huge amounts of data.
- 3+ years of experience in the art of writing code and solving problems on a large scale.
- An open communicator who shares thoughts and opinions frequently, listens intently, and takes constructive feedback.
This is a work-from-office role.
Job Description
Roles & Responsibilities
- Work across the entire landscape that spans network, compute, storage, databases, applications, and business domain
- Use the Big Data and AI-driven features of vuSmartMaps to provide solutions that will enable customers to improve the end-user experience for their applications
- Create detailed designs, solutions and validate with internal engineering and customer teams, and establish a good network of relationships with customers and experts
- Understand the application architecture and transaction-level workflow to identify touchpoints and metrics to be monitored and analyzed
- Analyze data and provide insights and recommendations.
- Constantly stay ahead in communicating with customers. Manage planning and execution of platform implementation at customer sites.
- Work with the product team in developing new features, identifying solution gaps, etc.
- Interest and aptitude in learning new technologies - Big Data, NoSQL databases, Elasticsearch, MongoDB, DevOps.
Skills & Experience
- At least 2+ years of experience in IT Infrastructure Management
- Experience in working with large-scale IT infra, including applications, databases, and networks.
- Experience in working with monitoring tools, automation tools
- Hands-on experience in Linux and scripting.
- Knowledge/experience in the following technologies will be an added plus: Elasticsearch, Kafka, Docker containers, MongoDB, Big Data, SQL databases, the ELK stack, REST APIs, web services, and JMX.
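Monitoring tools like those listed above typically roll raw request latencies up into percentiles rather than averages, because tail latency is what end users actually feel. A self-contained sketch of a p50/p95 computation using the nearest-rank method; the sample latencies are invented:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value such that at least
    p% of samples are less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# 20 request latencies in milliseconds, including one slow outlier.
latencies_ms = [12, 14, 11, 13, 15, 12, 14, 16, 13, 12,
                11, 15, 14, 13, 12, 16, 14, 13, 950, 12]
p50 = percentile(latencies_ms, 50)  # Median: unaffected by the outlier.
p95 = percentile(latencies_ms, 95)  # Tail: still excludes the single worst call.
```

Note how the 950 ms outlier would drag a plain average up dramatically while leaving p50 and p95 stable — the reason SLOs are usually defined on percentiles.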