
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

EaseMyTrip.com

Posted by Zainab Siddiqui
Noida
2 - 3 yrs
₹3L - ₹5L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Python
NodeJS (Node.js)
GitHub
+5 more

Key Responsibilities:

  • ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
  • 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
  • 🐧 Administer Linux servers and ensure their security and performance.
  • 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
  • 🔗 Manage source code repositories using GitLab or GitHub.
  • 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
  • 🤝 Collaborate with development teams to support application deployments and maintenance.
  • 🔒 Implement security best practices across cloud and server environments.



Required Skills:

  • ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
  • 🖥️ Strong understanding of Windows Server administration and IIS.
  • 🐧 Proficiency in Linux server management.
  • 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
  • 🔗 Knowledge of version control systems such as GitLab or GitHub.
  • 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
  • 📝 Strong documentation and communication skills.



Preferred Skills:

  • 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
  • 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
  • 🔒 Understanding of networking concepts, firewalls, and security best practices.
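The scripting-for-automation skill listed above can be illustrated with a minimal sketch. This is not from the posting; the function names and thresholds are illustrative, and the example uses only Python's standard library to classify a disk-usage reading the way a simple monitoring script might:

```python
import shutil

def disk_usage_percent(path="."):
    """Return the used-space percentage for the filesystem holding `path`."""
    total, used, _free = shutil.disk_usage(path)
    return round(used / total * 100, 1)

def check_thresholds(usage, warn=80.0, crit=90.0):
    """Classify a usage reading against warning/critical thresholds."""
    if usage >= crit:
        return "CRITICAL"
    if usage >= warn:
        return "WARNING"
    return "OK"

usage = disk_usage_percent(".")
status = check_thresholds(usage)
```

The same pattern translates directly to PowerShell (`Get-PSDrive`) or Bash (`df -P`), which the posting also accepts.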


Cloud Consulting and Engineering Firm

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
5 - 12 yrs
₹0.1L - ₹0.1L / yr
Artificial Intelligence (AI)
Generative AI
Amazon Web Services (AWS)
Large Language Models (LLM)
Kubernetes
+10 more

Description

We are a fast-growing company founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.


As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work with a broad spectrum of technologies and tools, including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.


What you will do...

  • Work with our clients to understand their requirements and technical challenges, then use this input to develop a technical design for a solution and communicate its value to the client team.
  • Develop delivery estimates and an estimated project plan.
  • Act as the lead technical member of the implementation project team, responsible for making the key technical decisions and keeping delivery on track, and for unblocking the team when things get stuck.
  • Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
  • Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
  • Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
  • Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.


What we need...

  • 5+ years of experience working in a Software Engineering capacity
  • Excellent knowledge and experience with Python, and preferably additional languages such as Go
  • Strong critical thinking skills, and a bias towards problem solving
  • Familiarity with implementing microservice architectures
  • Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
  • Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
  • Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
  • Experience deploying production workloads on the public cloud - either GCP or AWS
  • Experience using CI/CD tools such as GitHub Actions, GitLab, etc
  • Able to work with new tools and technologies where you may not have prior experience
  • Comfortable with being on video in meetings internally and with clients
  • Strong English communication skills



We are a fully remote company and offer competitive compensation and benefits.

EaseMyTrip.com

Posted by Madhu Sharma
Noida
3 - 7 yrs
₹5L - ₹9L / yr
PostgreSQL
PL/SQL
PG/SQL
Database performance tuning
DBA
+3 more

Job Title: PostgreSQL Developer / DBA

Experience: 3–7 years

Location: Noida


Job Description:

We are looking for a skilled PostgreSQL Developer / DBA with experience in managing databases in both Cloud and On-Premise environments.

The candidate will be responsible for database development, administration, performance tuning, and ensuring high availability and reliability of PostgreSQL databases.


Key Responsibilities:

• Design and develop database schemas, tables, indexes, and views

• Write and optimize complex SQL queries and PL/pgSQL functions

• Perform database performance tuning and query optimization

• Manage backup, recovery, and database maintenance

• Monitor database health and troubleshoot issues

• Manage PostgreSQL databases in Cloud (AWS/Azure/GCP) and On-Premise environments

• Implement replication, high availability, and security best practices
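The query-optimization responsibility above often comes down to reading execution plans and adding the right indexes. As a hedged, self-contained sketch (using Python's bundled sqlite3 rather than PostgreSQL, where the equivalent tool is `EXPLAIN ANALYZE`; table and index names are illustrative), this shows how an index changes a plan from a full scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("acme", 10.0), ("acme", 20.0), ("globex", 5.0)],
)

# Without an index on `customer`, the planner must scan the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()[-1][-1]

# After adding an index, the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
).fetchall()[-1][-1]
```

In PostgreSQL the workflow is the same in spirit: run `EXPLAIN (ANALYZE)` before and after creating the index and compare the plans.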

Required Skills:

• Strong experience with PostgreSQL administration (DBA) and development

• Expertise in SQL and PL/pgSQL

• Experience with database performance tuning

• Knowledge of backup, recovery, and replication

• Experience working with Cloud and On-Prem PostgreSQL environments

• Familiarity with Linux systems



Cloud Skills (Preferred)

• Experience with:

o AWS RDS / Aurora PostgreSQL

o Azure Database for PostgreSQL

o Google Cloud SQL for PostgreSQL

• Knowledge of cloud monitoring and scaling

Zoop.one

Posted by Malavika Kannoth
Pune
3 - 5 yrs
₹12L - ₹15L / yr
Go Programming (Golang)
Google Cloud Platform (GCP)
Apache Kafka
Docker

We are looking for a Golang Developer to help build and scale our platform. This role involves designing high-performance distributed systems, building robust APIs and contributing to systems that handle real-time workflows at scale. You will own problems end to end, working closely with product, engineering and platform teams to build solutions that are reliable and scalable.


Responsibilities

  • Develop new and enhance existing microservices, libraries, and features that form our B2B KYC platform.
  • Create and document APIs, Queue Contracts to be consumed by other services.
  • Work closely with the Product and Engineering Leads to implement features following best design principles and patterns.
  • Participate in all phases of the development cycle: plan, design, implement, review, test, deploy, document, and train.
  • Mentor junior developers in best practices such as TDD and make sure their code meets the standards.
  • Continuously educate them to improve overall team performance and work quality.

Requirements

  • 3 to 6 years of development experience, preferably in Go (Golang), plus scripting skills.
  • Bachelors/Masters in Computer Science or equivalent experience.
  • Strong understanding of Computer Science fundamentals, software design principles, algorithms & design patterns.
  • Interest and ability to quickly learn and ramp up on new languages and technologies.
  • Ability to write understandable, reliable and testable code with minimum supervision.
  • Experience with distributed, highly available systems running at large scale.
  • Experience with distributed platforms that use Kafka, Elasticsearch, Cassandra, or similar systems.
  • Experience with cloud environments (e.g., Docker, AWS, GCP, Kubernetes).
  • Familiarity with asynchronous programming patterns (e.g., Go routines/channels, async programming).
  • Experience with CI/CD (Continuous Integration & Delivery) and Agile work environments.
  • Ability to troubleshoot and solve issues on distributed systems.
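The goroutine/channel pattern named in the requirements has a close analogue in Python's asyncio, which can stand in as a hedged sketch here (the Go equivalent would use `go`, `chan`, and `close`; all names below are illustrative). A bounded queue plays the role of a buffered channel, with a sentinel standing in for channel close:

```python
import asyncio

async def producer(queue, items):
    # Plays the role of a goroutine writing to a channel.
    for item in items:
        await queue.put(item)
    await queue.put(None)  # sentinel standing in for closing the channel

async def consumer(queue):
    # Reads until the sentinel, doubling each value.
    results = []
    while (item := await queue.get()) is not None:
        results.append(item * 2)
    return results

async def main():
    queue = asyncio.Queue(maxsize=2)  # bounded, like a buffered channel
    prod = asyncio.create_task(producer(queue, [1, 2, 3]))
    results = await consumer(queue)
    await prod
    return results

results = asyncio.run(main())
```

The bounded queue gives the same backpressure behavior as a buffered Go channel: the producer blocks once the buffer is full until the consumer drains it.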

Why join Zoop

  • Work on high-scale, real-time systems that power critical identity and onboarding workflows for fast-growing businesses
  • Take end-to-end ownership of meaningful engineering problems while building clean, scalable, production-grade systems
  • Grow in a strong engineering environment with exposure to distributed systems, cloud-native technologies, and modern architectures


Koolioai
Posted by Aishwaria SterlingJames
Remote only
1 - 4 yrs
₹6L - ₹10L / yr
Python
React.js
Flask
Google Cloud Platform (GCP)

About koolio.ai

Website: www.koolio.ai

koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.

About the Full-Time Position

We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.

Key Responsibilities:

  • Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
  • Design and build efficient, secure, and modular client-side and server-side architecture
  • Develop high-performance web applications with reusable and maintainable code
  • Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
  • Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
  • Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance
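The REST API and Cloud Functions responsibilities above boil down to the same handler shape: validate input, return a status code and body. As a minimal sketch (not koolio.ai's actual code; the function name and fields are illustrative, and a real Google Cloud Function would receive a request object rather than a raw string), the core validation logic looks like this:

```python
import json

def handle_request(body):
    """Validate a JSON request body and return (status_code, response_dict)."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return 400, {"error": "invalid JSON"}
    if "text" not in payload:
        return 422, {"error": "missing 'text' field"}
    return 200, {"echo": payload["text"]}

status, resp = handle_request('{"text": "hello"}')
```

Keeping the validation pure like this makes the handler easy to unit-test independently of the hosting platform.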

Requirements and Skills:

  • Education: Degree in Computer Science or a related field
  • Work Experience: 2+ years of proven experience as a Full Stack Developer or in a similar role, with demonstrable expertise in building web applications at scale
  • Technical Skills:
  • Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
  • Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
  • Familiarity with NoSQL and PostgreSQL databases
  • Experience working with audio/video processing libraries is a strong plus
  • Soft Skills:
  • Strong problem-solving skills and the ability to think critically about issues and solutions
  • Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
  • Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
  • Keen attention to detail and a passion for delivering high-quality, scalable solutions
  • Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment

Compensation and Benefits:

  • Health Insurance: Comprehensive health coverage provided by the company
  • ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team

Why Join Us?

  • Be a part of a passionate and visionary team at the forefront of audio content creation
  • Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
  • Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
  • Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
  • Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact
Pendo

Posted by Eman Khan
Pune
3 - 7 yrs
Upto ₹45L / yr (varies)
Vue.js
React.js
Google Analytics
Java
Python
+6 more

About the Role

Pendo is looking for a Software Engineer to help build and scale the platform that powers our integrations with enterprise systems such as Salesforce, Slack, Segment, and other partner tools. This team develops the services, APIs, data pipelines, and user interfaces that enable customers to seamlessly connect Pendo into their product and data ecosystems.


In this role, you will primarily focus on building scalable backend systems while also contributing to the frontend experiences that allow customers to configure, manage, and monitor integrations. You’ll collaborate closely with product managers, designers, and infrastructure teams to deliver reliable, high-performance capabilities used by millions of users.


What You'll Do

  • Design and build scalable backend services and APIs that power Pendo’s integrations platform.
  • Develop and maintain distributed, event-driven data pipelines that process and sync high volumes of behavioral and product analytics data.
  • Contribute to frontend applications that allow customers to configure, manage, and monitor integrations and data workflows.
  • Lead technical initiatives from design through implementation, testing, and production rollout.
  • Integrate with third-party APIs and enterprise platforms using technologies such as REST, webhooks, and OAuth.
  • Collaborate with product, design, infrastructure, and partner teams to translate business needs into high-quality technical solutions.
  • Use modern development workflows and AI-powered tools to improve developer productivity and streamline engineering processes.
  • Participate in design reviews and promote best practices in testing, observability, performance, and system reliability.
  • Contribute to improving platform scalability, availability, and operational excellence.
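One concrete piece of the webhook integration work mentioned above is signature verification, which most third-party platforms implement as an HMAC over the request body. As a hedged sketch (the secret, payload, and helper names are illustrative, not a specific partner's scheme), the standard-library version is:

```python
import hashlib
import hmac

def sign_payload(secret, payload):
    """Hex HMAC-SHA256 signature a sender would attach to a webhook."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret, payload, signature):
    """Constant-time check of an incoming webhook's signature."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)

secret = b"shared-secret"
body = b'{"event": "user.created"}'
signature = sign_payload(secret, body)
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when comparing signatures.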


What We're Looking For

  • Experience building backend services, APIs, or distributed systems.
  • Experience developing modern web applications using frameworks such as Vue, React, or Angular.
  • Strong proficiency in at least one backend language such as Go, Java, Python, or C++.
  • Experience working with cloud infrastructure such as AWS or GCP.
  • Familiarity with distributed systems, event-driven architectures, or high-throughput data pipelines.
  • Experience writing and maintaining unit, integration, and end-to-end tests.
  • Strong collaboration and communication skills.


Nice to Have

  • Experience building integration platforms or working with third-party APIs.
  • Familiarity with authentication models such as OAuth and enterprise SaaS integrations.
  • Experience working with analytics or behavioral event data.
  • Experience leveraging AI-assisted development tools or working with modern AI workflows.


Technologies We Use

  • Frontend: Vue, Vuex, React, Angular, Highcharts, Jest, Cypress
  • Backend: Go, Java, Python, C++
  • Cloud & Data: AWS, GCP, Redis, Pub/Sub, SQL/NoSQL
  • AI / ML: GenAI, LLMs, LangChain, MLOps
Virtana

Posted by Krutika Devadiga
Pune
5 - 10 yrs
Best in industry
Java
Go Programming (Golang)
Kubernetes
Docker
Amazon Web Services (AWS)
+5 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • 7+ years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with at least one of these languages is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: Virtana delivers the industry’s broadest and deepest observability platform, enabling organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

Mumbai
3 - 6 yrs
₹7L - ₹15L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)

Lightning Job By Cutshort ⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered)


Job Overview:


We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.


Kindly apply at https://wohlig.keka.com/careers/jobdetails/54566


Responsibilities:


• Design, implement and maintain CI/CD pipelines using GitHub, Actions/Jenkins, Kubernetes, Helm, and ArgoCD.

• Deploy and manage Kubernetes clusters using AWS.

• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.

• Monitor system performance using Datadog, ELK, and Cloudflare tools.

• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.

• Collaborate with development teams to design, implement and test infrastructure changes.

• Troubleshoot and resolve infrastructure issues as they arise.

• Participate in on-call rotation and provide support for production issues.


Qualifications:

• Bachelor's or Master's degree in Computer Science, Engineering or a related field.

• 3+ years of experience in DevOps engineering with a focus on Linux, GitHub, Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.

• Strong understanding of Linux administration and shell scripting.

• Experience with automation tools such as Terraform, Ansible, or similar.

• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.

• Experience with container orchestration platforms such as Kubernetes.

• Familiarity with container technologies such as Docker.

• Experience with cloud providers such as AWS.

• Experience with monitoring tools such as Datadog and ELK.



Skills:

• Strong analytical and problem-solving skills.

• Excellent communication and collaboration skills.

• Ability to work independently or in a team environment.

• Strong attention to detail.

• Ability to learn and apply new technologies quickly.

• Ability to work in a fast-paced and dynamic environment.

• Strong understanding of DevOps principles and methodologies.

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT
+17 more

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or DBT to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution
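The ETL/ELT responsibilities above follow one basic shape: extract raw rows, transform and validate them, and load the clean result into a warehouse table. As a deliberately tiny sketch (table and field names are illustrative; a Dataform/dbt project would express the transform in SQL, and the warehouse would be BigQuery rather than the in-memory SQLite used here for self-containment):

```python
import sqlite3

# "Extract": raw rows as they might arrive from a source system.
raw_events = [
    {"user": "u1", "amount": "10.50"},
    {"user": "u2", "amount": "not-a-number"},  # malformed row to be rejected
    {"user": "u1", "amount": "4.50"},
]

def transform(rows):
    """Cast amounts to float, dropping rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((row["user"], float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would route this to a dead-letter table
    return clean

# "Load": write the cleaned rows into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_events (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO fact_events VALUES (?, ?)", transform(raw_events))
total = conn.execute("SELECT SUM(amount) FROM fact_events").fetchone()[0]
```

Isolating the transform as a pure function is what makes pipelines like this testable and what orchestrators (Airflow, Cloud Composer) schedule and retry as a unit.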


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these dates. Only immediate joiners will be considered.
Mango Sciences
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Python
SQL
ETL
Data pipeline
Datawarehousing
+12 more

The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.

What You’ll Own

  • Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
  • Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
  • The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
  • Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
  • Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.

The Stack You’ll Command

  • Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
  • Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
  • Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
  • Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
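The "CTE, Window Functions" expertise named in the stack above can be sketched briefly. This example runs on Python's bundled sqlite3 for self-containment (the posting's actual databases are MSSQL/PostgreSQL/Oracle, where the identical SQL applies; table and column names are illustrative, loosely echoing the clinical-visits domain):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (patient TEXT, visit_date TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)", [
    ("p1", "2024-01-01"), ("p1", "2024-02-01"), ("p2", "2024-01-15"),
])

# CTE + window function: number each patient's visits chronologically.
rows = conn.execute("""
    WITH ordered AS (
        SELECT patient, visit_date,
               ROW_NUMBER() OVER (
                   PARTITION BY patient ORDER BY visit_date
               ) AS visit_seq
        FROM visits
    )
    SELECT patient, visit_date, visit_seq
    FROM ordered
    ORDER BY patient, visit_seq
""").fetchall()
```

This set-based formulation replaces a row-by-row loop, which is exactly the "advanced, set-based SQL" style the posting describes.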

Who You Are

  • Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
  • Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
  • Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
  • Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."

Bonus Points for:

  • Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
  • Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
  • Search Experts: Experience with near-real-time indexing via Elasticsearch.

To move to the next stage of the process, please fill out the Google form with your updated resume.

 

Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7

 

Details: https://forms.gle/FGgkmQvLnS8tJqo5A

Quantiphi

Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Upto ₹30L / yr (varies)
Google Cloud Platform (GCP)
SQL
ETL
Datawarehousing
Data-flow analysis

We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.


Key Responsibilities

  • Collaborate with business users and stakeholders to understand business processes and data requirements
  • Design and implement dimensional data models, including fact and dimension tables
  • Identify, design, and implement data transformation and cleansing logic
  • Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
  • Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
  • Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
  • Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
  • Provide high-level design, research, and effort estimates for data integration initiatives
  • Provide production support for ETL processes to ensure data availability and SLA adherence
  • Analyze and resolve data pipeline and performance issues
  • Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
  • Translate business requirements into well-defined technical data specifications
  • Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
  • Define and document BI usage through use cases, prototypes, testing, and deployment
  • Support and enhance data governance and data quality processes
  • Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
  • Train and support business users, IT analysts, and developers
  • Lead and collaborate with teams spread across multiple locations
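The dimensional-modeling work listed above (fact and dimension tables, data transformation) can be sketched in miniature: assign surrogate keys to a dimension, then resolve them while loading fact rows. This is an illustrative Python sketch under assumed names (`upsert_dimension`, `customer_dim` are hypothetical); a real warehouse would express this in SQL or an ETL tool such as Dataflow or Informatica.

```python
# Illustrative star-schema load: assign surrogate keys to a dimension,
# then resolve them when loading fact rows. All names are hypothetical.

def upsert_dimension(dim, business_key, attributes):
    """Return the surrogate key for a dimension row, inserting if new."""
    if business_key not in dim["by_key"]:
        surrogate = len(dim["rows"]) + 1          # simple sequence generator
        dim["rows"].append({"sk": surrogate, "bk": business_key, **attributes})
        dim["by_key"][business_key] = surrogate
    return dim["by_key"][business_key]

def load_fact(fact_rows, customer_dim, source_records):
    """Transform source records into fact rows with resolved dimension keys."""
    for rec in source_records:
        sk = upsert_dimension(customer_dim, rec["customer_id"],
                              {"name": rec["customer_name"]})
        fact_rows.append({"customer_sk": sk, "amount": rec["amount"]})

customer_dim = {"rows": [], "by_key": {}}
facts = []
load_fact(facts, customer_dim, [
    {"customer_id": "C1", "customer_name": "Acme", "amount": 120.0},
    {"customer_id": "C2", "customer_name": "Globex", "amount": 75.5},
    {"customer_id": "C1", "customer_name": "Acme", "amount": 40.0},
])
```

Note how the repeated business key "C1" resolves to the same surrogate key, which is the property that keeps fact-to-dimension joins consistent.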

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science or a related field, or equivalent work experience
  • 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
  • Strong expertise in data warehousing concepts, tools, and best practices
  • Excellent SQL skills
  • Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
  • Hands-on experience with Google Cloud Platform (GCP) services, including:
  1. BigQuery
  2. Cloud SQL
  3. Cloud Composer (Airflow)
  4. Dataflow
  5. Dataproc
  6. Cloud Functions
  7. Google Cloud Storage (GCS)
  • Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
  • Strong experience integrating data using APIs, XML, JSON, and similar formats
  • In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
  • Solid understanding of SDLC, Agile, and Scrum methodologies
  • Strong problem-solving, multitasking, and organizational skills
  • Experience handling large-scale datasets and database design
  • Strong verbal and written communication skills
  • Experience leading teams across multiple locations

Good to Have

  • Experience with SSRS and SSIS
  • Exposure to AWS and/or Azure cloud platforms
  • Experience working with enterprise BI and analytics tools

Why Join Us

  • Opportunity to work on large-scale, enterprise data platforms
  • Exposure to modern cloud-native data engineering technologies
  • Collaborative environment with strong stakeholder interaction
  • Career growth and leadership opportunities
NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Amazon Web Services (AWS)
Terraform
Windows Azure
Google Cloud Platform (GCP)

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices
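The "troubleshoot production issues and ensure high availability" responsibility often takes the shape of a post-deployment health gate in the CI/CD pipeline: probe the new release, promote it if it responds, roll back otherwise. A hedged sketch, with the probe function injected so the gate stays cloud-agnostic; `health_gate` and its return values are hypothetical names, not any particular tool's API.

```python
# Minimal post-deploy health gate: poll a health check a fixed number of
# times and decide whether to keep or roll back a release. `check` is
# injected (e.g. an HTTP probe against /healthz in a real pipeline).

def health_gate(check, attempts=3):
    """Return 'promote' if any probe succeeds within `attempts`, else 'rollback'."""
    for attempt in range(1, attempts + 1):
        # A real gate would sleep with backoff between probes here.
        if check(attempt):
            return "promote"
    return "rollback"

# A release that becomes healthy on the second probe is promoted.
decision = health_gate(lambda attempt: attempt == 2)
```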


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Recruiting Bond
Posted by Pavan Kumar
Bengaluru (Bangalore), Mumbai
10 - 16 yrs
₹75L - ₹130L / yr
Distributed Systems
Microservices
Enterprise architecture
System Design & Architecture
Event-Driven Architecture

🚨 We’re Building a “Top 1% Engineering Org”


We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.

Think:

→ Rewriting legacy systems into AI-native architectures

→ Embedding LLMs + Agentic AI into core workflows

→ Reimagining platforms, infra, and data systems for the next decade

This is the kind of shift you'd expect from Google, Microsoft, or Meta, except here you get to build it from day 0 and scale it globally.


About the Role / Team

We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.


This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.


You will be working on:

  • Agentic AI systems & LLM-powered workflows
  • Distributed, scalable backend systems
  • Enterprise-grade AI platforms
  • Automation-first engineering environments


🚀 The Mandate

Own and evolve the technical backbone of an AI-first enterprise platform.


You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.


🧩 What You’ll Do

  • Architect large-scale distributed systems powering AI-driven workflows
  • Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
  • Redesign legacy systems into scalable, modular, AI-native architectures
  • Drive system design excellence across teams (APIs, infra, observability, reliability)
  • Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
  • Mentor senior engineers and influence engineering culture/org standards
  • Partner with product, data, and leadership on long-term technical strategy


🧠 What We’re Looking For

  • Proven track record building high-scale backend or platform systems
  • Deep expertise in distributed systems, microservices, and cloud (AWS/GCP/Azure)
  • Strong exposure to data systems, infrastructure, and real-time architectures
  • Experience or strong interest in LLMs, GenAI, or AI system design
  • Exceptional system design, abstraction, and problem-solving ability, including scalability and performance optimization
  • High ownership mindset: you think in terms of systems, not tickets
  • Strong coding skills in Python / Java / Go / Node.js
  • Solid understanding of data structures, backend architecture, and scalable API design
  • Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
  • Ability to solve hard system problems (latency, scale, reliability) and design large-scale distributed systems
  • Experience driving cross-team technical decisions and standards
  • Mentorship and technical leadership: ability to mentor senior engineers and influence org-wide architecture


Nice to Have

  • Experience integrating LLMs, vector databases, or AI pipelines
  • Contributions to architecture at scale
  • Experience with Agentic AI / LLM orchestration frameworks
  • Background in product engineering or platform companies
  • Exposure to global-scale systems (millions of users / high throughput)


🔥 What Sets You Apart

  • Built platforms used by millions of users / high-throughput systems
  • Experience with event-driven systems, stream processing, or infra platforms
  • Prior work on AI/ML platforms, model serving, or intelligent systems
SAAS Industry
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹20L - ₹25L / yr
TypeScript
NodeJS (Node.js)
Javascript
MongoDB
RESTful APIs

Job Details

Job Title: Full Stack Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-7 years

Working Days: 6 days

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: TypeScript, NodeJS, mongodb, RESTful APIs, React.js

 

Criteria

At least 4 years of professional experience as a Full Stack Engineer

Hands-on experience with both React.js and Node.js

Solid understanding of MongoDB

Experience building RESTful APIs

Strong experience with TypeScript

Strong understanding of asynchronous programming patterns

Preferred: candidates from SaaS/Software/IT Services startups or scale-up companies

 

Job Description

The Role:

We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.

You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.

 

What You’ll Own

1. Full Stack Development

● Design, develop, test, and deploy robust and scalable web applications.

● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.

● Contribute to frontend feature development and integration.

● Participate in feature planning, estimation, and execution.

 

2. Backend & API Engineering

● Design and develop RESTful APIs and backend services.

● Implement asynchronous workflows and scalable microservice architectures.

● Ensure performance, reliability, and security of backend systems.

● Implement authentication, authorization, and data protection best practices.

 

3. Database Design & Optimization

● Design and manage MongoDB schemas using Mongoose.

● Optimize queries and database performance for scale.

● Ensure data integrity and efficient data access patterns.

 

4. Frontend Collaboration & Integration

● Collaborate with frontend developers to integrate React components and APIs seamlessly.

● Ensure responsive, high-performing application behavior.

 

5. System Design & Scalability

● Contribute to system architecture and technical design discussions.

● Design scalable, maintainable, and future-ready solutions.

● Optimize applications for speed and scalability.

 

6. Product & Cross-Functional Collaboration

● Work closely with product and design teams to deliver high-quality features in rapid iterations.

● Participate in the full development lifecycle—from concept to deployment and maintenance.

 

7. Code Quality & Best Practices

● Write clean, testable, and maintainable code.

● Follow Git-based version control and code review best practices.

● Contribute to improving engineering standards and workflows.

 

What We’re Looking For

Must-Haves

● 4+ years of professional experience as a Full Stack Engineer or similar role.

● Strong proficiency in JavaScript and TypeScript.

● Hands-on experience with Node.js and Express.js.

● Solid understanding of MongoDB and Mongoose.

● Experience building and consuming RESTful APIs and microservices.

● Strong understanding of asynchronous programming patterns.

● Good grasp of system design principles and application architecture.

● Experience with Git and version control best practices.

● Bachelor’s degree in Computer Science, Engineering, or a related field.
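The "asynchronous programming patterns" requirement above usually comes down to fanning out independent I/O calls concurrently instead of awaiting them in sequence. In Node.js this is `Promise.all`; the same pattern is sketched here in Python's asyncio for brevity. `fetch_user` and `fetch_orders` are hypothetical stand-ins for real API or database calls.

```python
import asyncio

# Concurrent fan-out: gather several I/O-bound calls instead of
# awaiting them one after another.

async def fetch_user(user_id):
    await asyncio.sleep(0)              # placeholder for network latency
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_orders(user_id):
    await asyncio.sleep(0)
    return [{"user": user_id, "total": 99}]

async def load_profile(user_id):
    # Both requests run concurrently; total latency is the max, not the sum.
    user, orders = await asyncio.gather(fetch_user(user_id),
                                        fetch_orders(user_id))
    return {**user, "orders": orders}

profile = asyncio.run(load_profile(7))
```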

 

Good-to-Have / Preferred

● Frontend development experience with React.js.

● Exposure to Three.js or similar 3D/visualization libraries.

● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).

● Knowledge of Docker and containerization workflows.

● Experience with testing frameworks (Jest, Mocha, etc.).

● Familiarity with CI/CD pipelines and automated deployments.

 

Tools You’ll Use

● Backend: Node.js, Express.js, TypeScript

● Frontend: React.js (preferred)

● Database: MongoDB, Mongoose

● Version Control: Git, GitHub / GitLab

● Cloud & DevOps: AWS / GCP / Azure, Docker

● Collaboration: Google Workspace, Notion, Slack

 

Key Metrics You’ll Own

● Code quality, performance, and scalability

● Timely delivery of features and releases

● System reliability and reduction in production issues

● Contribution to architectural improvements

 

Why company

● Work on impactful, product-driven tech platforms.

● High-ownership role with end-to-end engineering exposure.

● Opportunity to work with modern technologies and evolving architectures.

● Collaborative startup culture with strong learning and growth opportunities.

 

SAAS Industry
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
NodeJS (Node.js)
RESTful APIs
NOSQL Databases
Systems design

Job Details

Job Title: Senior Backend Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-8 years

Working Days: 6 days a week (5 days in office, Saturdays WFH)

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL

 

Criteria

· Minimum 5+ years in backend engineering with strong system design expertise

· Experience building scalable systems from scratch

· Expert-level proficiency in Node.js

· Deep understanding of distributed systems

· Strong NoSQL design skills

· Hands-on AWS cloud experience

· Proven leadership and mentoring capability

· Preferred: candidates from SaaS/Software/IT Services startups or scale-up companies

 

Job Description

The Role:

What You’ll Build:

1. System Architecture & Design

● Architect highly scalable backend systems from the ground up

● Define technology choices: frameworks, databases, queues, caching layers

● Evaluate microservices vs monoliths based on product stage

● Design REST, GraphQL, and real-time WebSocket APIs

● Build event-driven systems for asynchronous processing

● Architect multi-tenant systems with strict data isolation

● Maintain architectural documentation and technical specs

2. Core Backend Services

● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions

● Create 3D asset processing pipelines for uploads, conversions, and optimization

● Develop distributed job workers for CPU/GPU-intensive tasks

● Build authentication/authorization systems (RBAC)

● Implement billing, subscription, and usage metering

● Build secure webhook systems and third-party integration APIs

● Create real-time collaboration features via WebSockets/SSE

3. Data Architecture & Databases

● Design scalable schemas for 3D metadata, XR sessions, and analytics

● Model complex product catalogs with variants and hierarchies

● Implement Redis-based caching strategies

● Build search and indexing systems (Elasticsearch/Algolia)

● Architect ETL pipelines and data warehouses

● Implement sharding, partitioning, and replication strategies

● Design backup, restore, and disaster recovery workflows
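The "Redis-based caching strategies" item above most commonly means cache-aside: read from the cache, fall through to the database on a miss, and invalidate on writes. A hedged sketch, with a plain dict standing in for Redis and `loader` standing in for the database query; all names are hypothetical.

```python
# Cache-aside sketch: a dict stands in for Redis, `loader` for the DB query.

class CacheAside:
    def __init__(self, loader):
        self.store = {}        # stand-in for Redis (real code: GET/SETEX with a TTL)
        self.loader = loader
        self.misses = 0

    def get(self, key):
        if key not in self.store:          # miss: fall through to the database
            self.misses += 1
            self.store[key] = self.loader(key)
        return self.store[key]

    def invalidate(self, key):
        self.store.pop(key, None)          # drop on write so the next read reloads

cache = CacheAside(loader=lambda k: {"sku": k, "price": 100})
first = cache.get("sku-1")     # miss: hits the loader
second = cache.get("sku-1")    # hit: served from cache
```

In production the invalidation path (and a TTL) matters as much as the read path; stale entries are the usual failure mode of this pattern.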

4. Scalability & Performance

● Build systems designed for 10x–100x traffic growth

● Implement load balancing, autoscaling, and distributed processing

● Optimize API response times and database performance

● Implement global CDN delivery for heavy 3D assets

● Build rate limiting, throttling, and backpressure mechanisms

● Optimize storage and retrieval of large 3D files

● Profile and improve CPU, memory, and network performance
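The rate limiting and throttling item above is classically implemented as a token bucket: requests spend tokens, tokens refill at a fixed rate, and a burst is capped by the bucket's capacity. A minimal deterministic sketch with time injected; a real service would use a monotonic clock and one bucket per tenant or API key.

```python
# Minimal token-bucket rate limiter; `now` is passed in so the behaviour
# is deterministic. Names and parameters are illustrative.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity   # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        elapsed = now - self.last
        self.last = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
burst = [bucket.allow(0.0) for _ in range(3)]   # only the burst capacity passes
later = bucket.allow(1.0)                       # one token refilled after 1s
```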

5. Infrastructure & DevOps

● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)

● Build CI/CD pipelines for automated deployments and rollbacks

● Use IaC tools (Terraform/CloudFormation) for infra provisioning

● Set up monitoring, logging, and alerting systems

● Use Docker + Kubernetes for container orchestration

● Implement security best practices for data, networks, and secrets

● Define disaster recovery and business continuity plans

6. Integration & APIs

● Build integrations with Shopify, WooCommerce, Magento

● Design webhook systems for real-time events

● Build SDKs, client libraries, and developer tools

● Integrate payment gateways (Stripe, Razorpay)

● Implement SSO and OAuth for enterprise customers

● Define API versioning and lifecycle/deprecation strategies

7. Data Processing & Analytics

● Build analytics pipelines for engagement, conversions, and XR performance

● Process high-volume event streams at scale

● Build data warehouses for BI and reporting

● Develop real-time dashboards and insights systems

● Implement analytics export pipelines and platform integrations

● Enable A/B testing and experimentation frameworks

● Build personalization and recommendation systems

 

Technical Stack:

1. Backend Languages & Frameworks 

●  Primary: Node.js (Express, NestJS), Python (FastAPI, Django)

●  Secondary: Go, Java/Kotlin (Spring)

●  APIs: REST, GraphQL, gRPC


2. Databases & Storage

● SQL: PostgreSQL, MySQL

● NoSQL: MongoDB, DynamoDB

● Caching: Redis, Memcached

● Search: Elasticsearch, Algolia

● Storage/CDN: AWS S3, CloudFront

● Queues: Kafka, RabbitMQ, AWS SQS

 

3. Cloud & Infrastructure: 

● Cloud: AWS (primary), GCP/Azure (nice to have)

● Compute: EC2, Lambda, ECS, EKS

● Infrastructure: Terraform, CloudFormation

● CI/CD: GitHub Actions, Jenkins, CircleCI

● Containers: Docker, Kubernetes

 

4. Monitoring & Operations 

● Monitoring: Datadog, New Relic, CloudWatch

● Logging: ELK Stack, CloudWatch Logs

● Error Tracking: Sentry, Rollbar

● APM tools

 

5. Security & Auth

● Auth: JWT, OAuth 2.0, SAML

● Secrets: AWS Secrets Manager, Vault

● Security: Encryption (at rest/in transit), TLS/SSL, IAM

 


What We’re Looking For:

1. Must-Haves

● 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)

● Deep understanding of distributed systems and microservices

● Strong SQL/NoSQL design skills with performance optimization

● Hands-on AWS cloud experience

● Ability to write high-quality production code daily

● Experience building and scaling RESTful APIs

● Strong understanding of caching, sharding, horizontal scaling

● Solid security and best-practice implementation experience

● Proven leadership and mentoring capability


2. Highly Desirable

● Experience with large file processing (3D, video, images)

● Background in SaaS, multi-tenancy, or e-commerce

● Experience with real-time systems (WebSockets, streams)

● Knowledge of ML/AI infrastructure

● Experience with HA systems, DR planning

● Familiarity with GraphQL, gRPC, event-driven systems

● DevOps/infrastructure engineering background

● Experience with XR/AR/VR backend systems

● Open-source contributions or technical writing

● Prior senior technical leadership experience

 

Technical Challenges You’ll Solve:

● Designing large-scale 3D asset processing pipelines

● Serving XR content globally with ultra-low latency

● Scaling from thousands to millions of daily requests

● Efficiently handling CPU/GPU-heavy workloads

● Architecting multi-tenancy with complete data isolation

● Managing billions of analytics events at scale

● Building future-proof APIs with backward compatibility

 

Why company:

● Architectural Ownership: Build foundational systems from scratch

● Deep Technical Work: Solve distributed systems and scaling challenges

● Hands-On Impact: Design and code mission-critical infrastructure

● Diverse Problems: APIs, infra, data, ML, XR, asset processing

● Massive Scale Opportunity: Build systems for exponential growth

● Modern Stack and best practices

● Product Impact: Your architecture directly powers millions of users

● Leadership Opportunity: Shape engineering culture and direction

● Learning Environment: Stay at the forefront of backend engineering

● Backed by AWS, Microsoft, Google

 

Location & Work Culture:

● Location: Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: Builder mindset, strong ownership, technical excellence

● Team: Small, highly skilled backend and infra team

● Resources: AWS credits, latest tooling, learning budget

 

Quantiphi
Posted by Nikita Sinha
Mumbai, Trivandrum, Bengaluru (Bangalore)
3 - 6 yrs
Up to ₹30L / yr (varies)
Google Cloud Platform (GCP)
DevOps
CI/CD
Kubernetes
GitHub

Role & Responsibilities

  • Develop and deliver automation software to build and improve platform functionality
  • Ensure reliability, availability, and manageability of applications and cloud platforms
  • Champion adoption of Infrastructure as Code (IaC) practices
  • Design and build self-service, self-healing, monitoring, and alerting platforms
  • Automate development and testing workflows through CI/CD pipelines (Git, Jenkins, SonarQube, Artifactory, Docker containers)
  • Build and manage container hosting platforms using Kubernetes
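The "self-healing" platform idea above is, at its core, a reconcile loop: compare desired state to observed state and emit the corrective actions needed to converge, which is the same pattern Kubernetes controllers follow. A hedged sketch; the service names and action tuples are hypothetical, not a real controller API.

```python
# Reconcile loop sketch: diff desired vs observed replica counts and
# return the corrective actions a self-healing platform would apply.

def reconcile(desired, observed):
    """Return scale actions needed to converge observed state on desired state."""
    actions = []
    for service, want in desired.items():
        have = observed.get(service, 0)
        if have < want:
            actions.append(("scale_up", service, want - have))
        elif have > want:
            actions.append(("scale_down", service, have - want))
    return actions

# Two "api" replicas have died; the loop prescribes the fix.
actions = reconcile(desired={"api": 3, "worker": 2},
                    observed={"api": 1, "worker": 2})
```

Run on a timer (or on watch events), this loop makes recovery automatic rather than alert-driven, which is the practical difference between monitoring and self-healing.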

Requirements

  • Strong experience deploying and maintaining GCP cloud infrastructure
  • Well-versed in service-oriented and cloud-based architecture design patterns
  • Knowledge of cloud services including compute, storage, networking, messaging, and automation tools (e.g., CloudFormation/Terraform equivalents)
  • Experience with relational and NoSQL databases (Postgres, Cassandra)
  • Hands-on experience with automation/configuration tools (Puppet, Chef, Ansible, Terraform)

Additional Skills

  • Strong Linux system administration and troubleshooting skills
  • Programming/scripting exposure (Bash, Python, Core Java, or Scala)
  • CI/CD pipeline experience (Jenkins, Git, Maven, etc.)
  • Experience integrating solutions in multi-region environments
  • Familiarity with Agile/Scrum/DevOps methodologies


AI Recruiting Platform
Agency job via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
Python
Microservices
API
Java

Description

Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
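The "robust logging, alerting, and performance tuning" responsibility above often starts with instrumenting every service call for latency and outcome. A hedged Python sketch using only the standard library; the log fields and `create_candidate` handler are illustrative, not a fixed schema.

```python
import functools
import logging
import time

# Decorator that logs call name, status, and latency for each service call.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("backend")

def instrumented(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            # Emitted on both success and failure, so errors are never silent.
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("call=%s status=%s elapsed_ms=%.2f",
                     fn.__name__, status, elapsed_ms)
    return wrapper

@instrumented
def create_candidate(name):
    return {"name": name, "id": 1}

record = create_candidate("Ada")
```

Structured fields like these are what monitoring stacks (CloudWatch, Datadog, ELK) aggregate into latency percentiles and error-rate alerts.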


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


TVARIT GmbH
Posted by Dr. Soumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
Databricks

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE), etc. With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following the Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
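The pre-processing steps named above (deduplication, normalization, scaling) are shown below as a pure-Python sketch so it runs anywhere; the production version would express the same logic as PySpark DataFrame transformations (`dropDuplicates`, column expressions) on Databricks. The sensor-reading records are made up for illustration.

```python
# Pure-Python sketch of sensor-data pre-processing: deduplicate on a key,
# then min-max scale a numeric signal into [0, 1].

def deduplicate(rows, key):
    """Keep the first row seen for each value of `key`."""
    seen, out = set(), []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

def min_max_scale(values):
    """Scale values into [0, 1]; constant signals map to 0."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0            # guard against division by zero
    return [(v - lo) / span for v in values]

readings = [{"ts": 1, "temp": 200.0},
            {"ts": 1, "temp": 200.0},   # duplicate reading for the same timestamp
            {"ts": 2, "temp": 300.0}]
clean = deduplicate(readings, key="ts")
scaled = min_max_scale([r["temp"] for r in clean])
```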


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.

· 2 years of team-handling experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have: MLOps and DevOps experience, including model lifecycle management.

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Webnyay
Noida
4 - 8 yrs
₹6L - ₹30L / yr
Google Cloud Platform (GCP)
Artificial Intelligence (AI)
Python
Django
Apache Kafka

We are looking to recruit an expert for backend software development at Webnyay. We are an enterprise SaaS startup catering to India and international markets. We are now growing fast and need a rockstar senior software developer who is an expert in Python/Django and GCP.


What we are looking for:

  • At least 6 years of professional software development experience.
  • At least 4 years of experience with Python & Django.
  • Proficiency in Natural Language Processing (tokenization, stopword removal, lemmatization, embeddings, etc.)
  • Experience in computer vision fundamentals, particularly object detection concepts and architectures (e.g., YOLO, Faster R-CNN)
  • Experience in search and retrieval systems and related concepts like ranking models, vector search, or semantic search techniques
  • Experience with multiple databases (relational and non-relational).
  • Experience with hosting on GCP and other cloud services.
  • Familiar with continuous integration and other automation.
  • Focus on code quality and writing scalable code.
  • Ability to learn and adopt new technologies depending on business requirements.
  • Prior startup experience will be a plus!
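The NLP requirements above (tokenization, stopword removal, lemmatization) can be illustrated with a minimal pipeline. This sketch uses a tiny stopword set and a crude suffix-stripping normalizer standing in for real lemmatization; in practice a library such as spaCy or NLTK would be used, and the example sentence is hypothetical.

```python
import re

# Minimal text pre-processing pipeline: tokenize, drop stopwords,
# and crudely normalize word endings (a stand-in for lemmatization).

STOPWORDS = {"the", "is", "a", "of", "and"}

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def normalize(token):
    # Strip a common suffix, keeping at least a 3-character stem.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    return [normalize(t) for t in remove_stopwords(tokenize(text))]

tokens = preprocess("The courts are hearing the disputed claims")
```

The stems this produces (e.g. "disput") are deliberately rough; the point is the pipeline shape, which embedding and vector-search stages then build on.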


Some of your responsibilities would include:

  • Work closely in a highly AGILE environment with a team of engineers.
  • Create and maintain technical documentation of technical design and solution.
  • Build products/features that are highly scalable, secure, highly available, high performing and cost-effective.
  • Help team in debugging.
  • Perform code reviews.
  • Understand the full feature set/ implementation and architecture of the applications.
  • Analyze business goals and product requirements and contribute to application architecture design, development and delivery.
  • Provide technical expertise for every phase of the project lifecycle; from concept development to solution design, implementation, optimization and support.
  • Act as an interface with business teams to understand and create technical specifications for workable solutions within the project.
  • Explore and work with LLM APIs and Generative AI.
  • Make performance-related recommendations, identify and eliminate performance bottlenecks (hardware, software, configuration); drive performance tuning, re-design and re-factoring.
  • Participate in the software development lifecycle, which includes research, new development, modification, security, reuse, re-engineering and maintenance of common component libraries.
  • Participate in product definition and feature prioritization.
  • Collaborate with internal teams and stakeholders across business verticals.


Read more
CLOUDSUFI

at CLOUDSUFI

3 recruiters
Ayushi Dwivedi
Posted by Ayushi Dwivedi
Remote only
6 - 11 yrs
₹30L - ₹45L / yr
Google Cloud Platform (GCP)
SQL
Python

Highlights - Current location of candidate should be Bangalore

Total Exp - 6-12yrs

Joining Time period - Within 30 days

GCP BigQuery expert, GCP certified


About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.

 

Our Values 

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.

 

Equal Opportunity Statement 

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Google Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.


Key Responsibilities

ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Google Cloud Platform services (Cloud Run, Dataflow) and Python.

Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.

Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
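
The entity-resolution responsibility above amounts to matching incoming entity names against a registry and minting deterministic IDs for genuinely new entities. A minimal sketch follows; the normalization rules and `ent/` ID format are assumptions for illustration, not the project's actual scheme.

```python
import hashlib
import re

def normalize(name: str) -> str:
    """Canonicalize an entity name for matching: lowercase, collapse whitespace."""
    return re.sub(r"\s+", " ", name.strip().lower())

def resolve_entity(name: str, registry: dict[str, str]) -> tuple[str, bool]:
    """Return (entity_id, is_new); registry maps normalized name -> existing ID."""
    key = normalize(name)
    if key in registry:
        return registry[key], False
    # Deterministic ID: the same input always yields the same identifier,
    # so re-imports do not create duplicate entities.
    new_id = "ent/" + hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]
    registry[key] = new_id
    return new_id, True
```

Hash-derived IDs keep imports idempotent; a real system would also handle aliases and fuzzy matches before falling through to ID generation.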

Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards. 

API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.

Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
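
As a sketch of the kinds of checks this responsibility implies, the snippet below runs a schema check and a simple z-score anomaly scan. The required field names and the threshold are illustrative assumptions, not the project's actual validation rules.

```python
import statistics

# Assumed schema for illustration only.
REQUIRED_FIELDS = {"country", "year", "value"}

def schema_errors(row: dict) -> list[str]:
    """Report missing required fields and non-numeric measurement values."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - row.keys()]
    if "value" in row and not isinstance(row["value"], (int, float)):
        errors.append("value is not numeric")
    return errors

def anomalies(values: list[float], z_threshold: float = 3.0) -> list[float]:
    """Flag values more than z_threshold standard deviations from the mean."""
    if len(values) < 2:
        return []
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > z_threshold]
```

Production pipelines would layer checks like these into the ETL flow and fail or quarantine imports that breach them.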

Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.

Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.


Qualifications and Skills

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.

Experience: 6+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.

Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.

Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.


Core Competencies:

Must Have - SQL, Python, BigQuery, GCP Dataflow / Apache Beam, Google Cloud Storage (GCS)

Must Have - GCP Certification

Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)

Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling

Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).

Experience with data validation techniques and tools.

Familiarity with CI/CD practices and the ability to work in an Agile framework.

Strong problem-solving skills and keen attention to detail.

Read more
Ampera Technologies
Faisal AshrafNomani
Posted by Faisal AshrafNomani
Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
AWS
Windows Azure
Google Cloud Platform (GCP)
Large Language Models (LLM)
AI Agents
+2 more

Job Description:

 

We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems. 

This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production. 

You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.

 

Key Responsibilities

AI & Agentic Infrastructure 

  • Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows. 
  • Build scalable runtime environments for LLM orchestration frameworks. 
  • Enable deployment of AI copilots, assistants, and autonomous decision systems. 

Common frameworks may include: 

  • LangChain 
  • LlamaIndex 
  • AutoGPT 

 

LLMOps & AI Model Lifecycle 

Design and manage LLMOps pipelines for the full lifecycle of large language models: 

  • Model deployment 
  • Prompt management 
  • Versioning 
  • Evaluation and testing 
  • Model monitoring 

Integrate with AI platforms such as: 

  • Azure Machine Learning 
  • Amazon SageMaker 
  • Vertex AI 
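
The "prompt management" and "versioning" items above can be illustrated with a tiny in-memory registry where each save produces an immutable, content-addressed version. This is a hypothetical sketch, not the API of Azure ML, SageMaker, or Vertex AI.

```python
import hashlib
import time

class PromptRegistry:
    """Toy in-memory prompt store: each save creates an immutable version."""

    def __init__(self):
        self._versions: dict[str, list[dict]] = {}

    def save(self, name: str, template: str) -> str:
        # Content-addressed version ID: identical templates hash identically.
        version_id = hashlib.sha256(template.encode("utf-8")).hexdigest()[:8]
        self._versions.setdefault(name, []).append(
            {"id": version_id, "template": template, "ts": time.time()}
        )
        return version_id

    def latest(self, name: str) -> str:
        return self._versions[name][-1]["template"]

    def get(self, name: str, version_id: str) -> str:
        for v in self._versions[name]:
            if v["id"] == version_id:
                return v["template"]
        raise KeyError(version_id)
```

Pinning a prompt by version ID is what lets evaluation runs be reproduced after the "latest" template has moved on.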

 

Retrieval-Augmented Generation (RAG) Infrastructure 

Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs. 

Responsibilities include: 

  • Document ingestion pipelines 
  • Embedding generation workflows 
  • Knowledge indexing 
  • Query orchestration 
  • Retrieval optimization 
  • Support scalable semantic search architectures. 
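
The retrieval step at the heart of the pipeline above can be sketched as brute-force cosine similarity over document embeddings. The toy vectors stand in for a real embedding model, and a production system would use a vector index rather than a linear scan.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float], corpus: list[tuple], top_k: int = 2) -> list[str]:
    """corpus: list of (doc_id, embedding). Returns top_k doc_ids by similarity."""
    scored = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

def build_prompt(question: str, contexts: list[str]) -> str:
    """Assemble retrieved passages and the question into an LLM prompt."""
    joined = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"
```

The assembled prompt is then sent to the LLM; ingestion, chunking, and embedding generation sit upstream of this step.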

 

Vector Database & Knowledge Infrastructure 

Deploy and manage vector databases used for AI applications and semantic retrieval. 

Common technologies include: 

  • Pinecone 
  • Weaviate 
  • Milvus 
  • FAISS 

Responsibilities include: 

  • Index optimization 
  • Query latency tuning 
  • Scalable embedding storage 
  • Hybrid search architecture 

 

Multi-Cloud AI Infrastructure 

Design and maintain AI-ready infrastructure across: 

  • Amazon Web Services 
  • Microsoft Azure 
  • Google Cloud Platform 

Key responsibilities include: 

  • GPU infrastructure management 
  • Distributed training environments 
  • Hybrid cloud integrations with on-prem data centers 
  • Infrastructure scaling for AI workloads 

 

Data Platforms & Integration 

  • Support deployment and optimization of data lakes, data warehouses, and streaming platforms. 
  • Work with data engineering teams to ensure secure and scalable data infrastructure. 

 

Cloud Architecture & Infrastructure 

  • Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud. 
  • Build hybrid cloud architectures integrating on-premise environments with cloud platforms. 
  • Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads. 

 

DevOps, Platform Engineering & Automation 

Build automated cloud infrastructure using modern DevOps practices. 

Tools may include: 

  • Terraform 
  • Docker 
  • Kubernetes 
  • GitHub Actions 

Responsibilities include: 

  • Infrastructure as Code (IaC) 
  • Automated deployments 
  • CI/CD pipelines for AI models and services 
  • Platform reliability and scalability 

 

AI Observability & Monitoring 

Implement observability frameworks to monitor AI systems in production. 

This includes: 

  • Model performance monitoring 
  • Prompt evaluation 
  • Hallucination detection 
  • Latency and throughput analysis 
  • Cost monitoring for LLM usage 

Tools may include: 

  • Arize AI 
  • WhyLabs 
  • Weights & Biases 

 

Security, Governance & Responsible AI 

Ensure AI systems follow strong governance and security practices. 

Responsibilities include: 

  • Data privacy and compliance 
  • Model governance frameworks 
  • Secure model deployment 
  • Monitoring model bias and drift 
  • AI risk management 

Support enterprise frameworks for Responsible AI and AI compliance. 

 

Data & Security 

  • Experience with data lake architectures, distributed storage, and ETL pipelines 
  • Knowledge of data security, encryption, IAM, and compliance frameworks 
  • Familiarity with AI governance and responsible AI practices 

 

 

Required Skills 

Cloud & Infrastructure 

  • Strong experience in Azure (must have), AWS or GCP 
  • Hybrid and multi-cloud architecture 
  • GPU infrastructure management 

DevOps & Automation 

  • Kubernetes 
  • Docker 
  • Terraform 
  • CI/CD pipelines 

AI / ML Platforms 

  • MLOps pipelines 
  • Model deployment 
  • Model monitoring 

AI Application Infrastructure 

  • Vector databases 
  • RAG pipelines 
  • LLM orchestration frameworks 

Programming 

Experience in one or more languages: 

  • Python 
  • Go 
  • Java 
  • TypeScript 

 

 

 

Preferred Qualifications 

  • Experience building AI copilots or autonomous agents 
  • Knowledge of GPU infrastructure and distributed model training 
  • Familiarity with AI evaluation frameworks, model monitoring, drift detection, and AI observability 
  • Experience building enterprise AI platforms 

 

Education & Experience 

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field 
  • 4–8+ years experience in cloud infrastructure, DevOps, or platform engineering 
  • Experience working in data-driven or AI-focused environments 

 

 

What Success Looks Like 

  • Reliable deployment pipelines and infrastructure for ML models, LLMs, and AI agents; scalable RAG knowledge platforms 
  • Efficient multi-cloud infrastructure management and fast deployment cycles for AI products 
  • Secure and scalable AI-ready cloud platforms 
  • Strong automation and governance across cloud and AI systems 


Read more
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Kubernetes
+12 more

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Read more
Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snowflake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


Read more
Wohlig Transformations Pvt Ltd
Apoorva Lakshkar
Posted by Apoorva Lakshkar
Mumbai
7 - 10 yrs
₹15L - ₹23L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
DevOps
Kubernetes

Job Overview 


We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration and management, and automation tools like CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.


Key Responsibilities 


  • Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
  • Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
  • Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
  • Provide guidance on selecting the appropriate cloud services for various workloads.
  • Design, implement, and optimize CI/CD pipelines to streamline software delivery.
  • Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
  • Collaborate with development and operations teams to enhance the overall DevOps culture.
  • Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
  • Architect and optimize Kubernetes clusters for high availability and scalability.
  • Engage in research and development activities to stay abreast of industry trends and emerging technologies.
  • Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
  • Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
  • Ensure high-performance levels and reliability in production environments.
  • Design scalable and high-performance database architectures tailored to meet business needs.
  • Execute database migrations with a keen focus on data consistency, integrity, and performance.
  • Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
  • Optimize database workflows to enhance efficiency and reliability.
  • Work closely with clients to assess and enhance the quality of existing architectures.
  • Implement best practices to ensure robust, secure, and well-architected solutions.
  • Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
  • Provide technical leadership and mentorship to junior team members.
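
The database pipeline responsibility above (automating schema changes and keeping reruns safe) can be sketched as an ordered, idempotent migration runner. The in-memory `dict` stands in for a real database connection, and the migration IDs are hypothetical.

```python
from typing import Callable

def apply_migrations(
    applied: set[str],
    migrations: list[tuple[str, Callable[[dict], None]]],
    db: dict,
) -> list[str]:
    """Run pending migrations in order; record IDs so reruns are no-ops."""
    ran = []
    for mig_id, migrate in migrations:
        if mig_id in applied:
            continue  # already applied in an earlier run
        migrate(db)
        applied.add(mig_id)
        ran.append(mig_id)
    return ran
```

Real migration tools persist the `applied` set in a version table inside the database itself, so every environment converges on the same schema.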


Required Skills and Qualifications: 


  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Relevant industry experience in a Solution Architect role.
  • Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
  • Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
  • In-depth knowledge of Kubernetes and container orchestration.
  • Strong background in scaling architectures to handle significant workloads.
  • Sound knowledge in database migrations
  • Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.


Read more
Remote only
0 - 8 yrs
₹3L - ₹12L / yr
Linux/Unix
Google Cloud Platform (GCP)
DevOps
CI/CD
Docker
+8 more

We are looking for a passionate and detail-oriented Site Reliability Engineer (SRE) to ensure the reliability, scalability, and performance of our production systems. This role is open to freshers as well as experienced professionals who are eager to work on cloud infrastructure, automation, and system monitoring.


Key Responsibilities:

1. Monitor system performance, availability, and reliability.

2. Automate deployment, scaling, and infrastructure management processes.

3. Troubleshoot production issues and perform root cause analysis.

4. Improve system reliability through automation and performance tuning.

5. Implement CI/CD pipelines and DevOps best practices.

6. Maintain documentation for infrastructure and processes.

7. Collaborate with development and operations teams.

8. Ensure security, backup, and disaster recovery strategies are in place.
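
For candidates new to SRE, responsibilities 1 and 4 boil down to probing endpoints and comparing availability against an SLO. A minimal sketch follows; the 99% SLO and the probe mechanics are placeholders, and real setups use Prometheus/Grafana rather than a script.

```python
from urllib.error import URLError
from urllib.request import urlopen

def probe(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with an HTTP status below 500."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False

def availability(results: list[bool]) -> float:
    """Fraction of successful probes over a window."""
    return sum(results) / len(results) if results else 0.0

def breaches_slo(results: list[bool], slo: float = 0.99) -> bool:
    """True when measured availability falls below the SLO target."""
    return availability(results) < slo
```

An alerting system would page when `breaches_slo` flips true, feeding the root-cause analysis loop in responsibility 3.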


Required Skills:

1. Basic understanding of Linux/Unix systems.

2. Knowledge of cloud platforms (AWS / Azure / GCP).

3. Understanding of DevOps concepts and CI/CD pipelines.

4. Familiarity with Docker and Kubernetes (basic knowledge for freshers).

5. Scripting knowledge (Python / Bash / Shell).

6. Basic networking knowledge (DNS, HTTP, Load Balancing).

7. Knowledge of monitoring tools (Prometheus, Grafana, etc.).

8. Strong analytical and problem-solving skills.


Preferred Skills (Good to Have):

1. Experience with Infrastructure as Code (Terraform / Ansible).

2. Understanding of microservices architecture.

3. Experience with version control tools (Git).


Eligibility:

1. B.E / B.Tech / B.Sc / M.Tech / MCA or related field.

2. Freshers with strong DevOps interest are welcome.

3. 0–8 years of relevant experience.


Location: Remote / Chennai

Employment Type: Full-Time 


Apply here: https://connectsblue.com/jobs/741/site-reliability-engineer-sre-at-bluepms-software-solutions-pvt-ltd

Read more
Recruiting Bond

at Recruiting Bond

2 candid answers
Pavan Kumar
Posted by Pavan Kumar
Mumbai, Navi Mumbai
10 - 15 yrs
₹55L - ₹80L / yr
Distributed Systems
Systems design
Systems architecture
High-level design
LLD
+77 more

Location: Mumbai, Maharashtra, India

Sector: Technology, Information & Media

Company Size: 500 - 1,000 Employees

Employment: Full-Time, Permanent

Experience: 10 - 14 Years (Engineering Leadership)

Level: Engineering Manager / Group EM


ABOUT THIS MANDATE :


Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.


This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.


We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.


THE OPPORTUNITY AT A GLANCE :


AI-First Engineering Culture :

  • Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.


Hands-On Engineering Leadership :

  • Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.


People & Org Builder :

  • Grow engineers into leaders. Build squads of 6–15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.


KEY RESPONSIBILITIES :


1. Hands-On Technical Engagement :

  • Remain deeply embedded in the technical work: participate in design reviews, architecture decisions, and critical code reviews
  • Set and uphold the engineering quality bar : performance benchmarks, security standards, test coverage, and release quality
  • Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
  • Identify and resolve systemic technical debt and architectural risks across team-owned services
  • Unblock engineers by diving into complex problems: debugging, pair programming, and system analysis when it matters
  • Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability


2. AI Adoption, Integration & Transformation (2026 Mandate) :

  • Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
  • Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow: coding, testing, documentation, incident response
  • Embed LLM-powered capabilities into the product : recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
  • Lead evaluation and adoption of AI-assisted SDLC practices : automated code review, AI-generated test suites, intelligent observability, and anomaly detection
  • Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
  • Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks
  • Create an experimentation culture : run structured AI pilots, measure productivity impact, and scale what works
  • Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications


3. People Leadership & Team Development :

  • Lead, manage, and grow squads of 6 - 15 engineers across seniority levels (L2 through L6 / Junior through Staff)
  • Conduct structured 1 : 1s, career growth conversations, and development planning with every direct report
  • Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by end of 2026
  • Build and maintain a high-performance team culture : clarity of ownership, accountability, fast feedback loops, and psychological safety
  • Drive performance management fairly and rigorously; recognise top performers, manage underperformance constructively
  • Lead technical hiring end-to-end : define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
  • Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering


4. Engineering Delivery & Execution Excellence :

  • Own end-to-end delivery for multiple product squads from planning and scoping through production release and post-launch stability
  • Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
  • Drive predictable delivery : maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams.
  • Establish and own engineering KPIs : DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends
  • Lead incident management : build blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
  • Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
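
The DORA metrics named above reduce to simple arithmetic over deployment records. The record shape below is an assumption for illustration; real teams derive these from CI/CD and incident tooling.

```python
from datetime import datetime, timedelta

def dora_summary(deploys: list[dict], window_days: int = 30) -> dict:
    """deploys: [{'committed': datetime, 'deployed': datetime, 'failed': bool}]."""
    if not deploys:
        return {"deploy_frequency_per_day": 0.0, "lead_time_hours": 0.0,
                "change_failure_rate": 0.0}
    # Lead time for changes: commit-to-production, averaged in hours.
    lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600
                  for d in deploys]
    failures = sum(1 for d in deploys if d["failed"])
    return {
        "deploy_frequency_per_day": len(deploys) / window_days,
        "lead_time_hours": sum(lead_times) / len(lead_times),
        "change_failure_rate": failures / len(deploys),
    }
```

MTTR, the fourth DORA metric, needs incident open/close timestamps and is computed the same way over incident records.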


5. Strategic Leadership & Cross-Functional Influence :

  • Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
  • Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
  • Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
  • Contribute to org-wide engineering strategy : platform investments, build-vs-buy decisions, and shared infrastructure priorities
  • Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
  • Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use


AI TRANSFORMATION LEADERSHIP 2026 EXPECTATIONS :


In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role :


AI Developer Productivity

  • Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.


LLM & GenAI Product Features

  • Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.


AI-Augmented Observability

  • Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.


Team AI Fluency :

  • Build mandatory AI literacy across all engineering levels.
  • Every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.


Responsible AI Governance :

  • Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.


TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED :


  • Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
  • Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
  • AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
  • Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
  • Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
  • Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
  • MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
  • Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
  • CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)


QUALIFICATIONS & CANDIDATE PROFILE :

Education :

  • B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
  • Demonstrated engineering depth and leadership impact may complement institution pedigree


Experience :

  • 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
  • Proven track record of managing and scaling engineering teams (6–15+ engineers) in a fast-growing SaaS or digital product environment
  • Hands-on backend engineering background must be able to read, write, and critique production code
  • Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
  • Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
  • Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
  • Demonstrated career stability : a minimum of 2 years of average tenure per organisation.


The Ideal Engineering Manager in 2026 :

  • Leads with context, not control; empowers engineers while maintaining accountability and quality
  • Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
  • Sees AI as a force multiplier for the team, not a threat; actively experiments with and advocates for AI tooling
  • Measures success by team outcomes, not personal output; takes pride in what the team ships, not what they build alone
  • Creates feedback loops obsessively: between product and engineering, between seniors and juniors, between metrics and decisions
  • Holds strong opinions loosely; brings conviction to discussions but updates on evidence
  • Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites


WHY THIS ROLE STANDS APART :


AI Transformation at Scale :

  • Lead one of the most significant AI adoption programmes in India's digital media sector.
  • Your decisions will shape how hundreds of engineers work in 2026 and beyond.


Hands-On & Strategic Balance :

  • A rare EM role that actively encourages technical depth.
  • Stay close to the code while owning the people agenda - the best of both worlds.


Established Platform, Real Scale :

  • 500–1,000 engineers, proven product-market fit, and the org maturity to execute.
  • This is not a greenfield startup gamble; it is a serious company with serious ambition.


Clear Leadership Growth Path :

  • A visible, direct path toward Director / VP of Engineering.
  • Senior leadership is invested in growing its next generation of technology executives.


House Of Shipping
Sanikha M
Posted by Sanikha M
Chennai
3 - 8 yrs
₹8L - ₹15L / yr
Google Cloud Platform (GCP)
NodeJS (Node.js)
Python
Java
API
+1 more

Key Responsibilities

  • Design, develop, and maintain microservices and APIs running on GKE, Cloud Run, App Engine, and Cloud Functions.
  • Build secure, scalable REST and GraphQL APIs to support Our Client's front-end applications and integrations.
  • Work with the GCP Architect to ensure back-end design aligns with enterprise architecture and security best practices.
  • Implement integration layers between GCP-hosted services, AlloyDB, Cloud Spanner, Cloud SQL, and third-party APIs.
  • Deploy services using Gemini Code Assist, CLI tools, and Git-based CI/CD pipelines.
  • Optimize service performance, scalability, and cost efficiency.
  • Implement authentication, authorization, and role-based access control using GCP Identity Platform / IAM.
  • Work with AI/ML services (e.g., Vertex AI, Document AI, NLP APIs) to enable intelligent back-end capabilities.
  • Collaborate with front-end developers to design efficient data contracts and API payloads.
  • Participate in code reviews and enforce clean, maintainable coding standards.
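
For candidates new to Cloud Run, the deployment model behind the responsibilities above is simple to sketch: Cloud Run runs any container whose process serves HTTP on the port given by the PORT environment variable. The minimal WSGI app below is an illustrative sketch only (the /health route and response fields are invented, not Our Client's actual API); on Cloud Run it would typically be served by a WSGI server such as gunicorn:

```python
import json


def app(environ, start_response):
    """Minimal WSGI app; on Cloud Run it would be served (e.g. by gunicorn)
    on the port provided in the PORT environment variable."""
    path = environ.get("PATH_INFO", "/")
    if path == "/health":
        status, payload = "200 OK", {"status": "ok"}
    else:
        status, payload = "404 Not Found", {"error": "not found", "path": path}
    body = json.dumps(payload).encode()
    start_response(status, [("Content-Type", "application/json"),
                            ("Content-Length", str(len(body)))])
    return [body]


# Smoke test without starting a server: invoke the WSGI callable directly.
seen = []
body = app({"PATH_INFO": "/health"}, lambda status, headers: seen.append(status))
print(seen[0], body[0].decode())  # → 200 OK {"status": "ok"}
```

Calling the app directly, as in the smoke test, is enough to unit-test routing and payloads without starting a server.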

Experience & Qualifications

  • 6–8 years of back-end development experience, with at least 3+ years in senior/lead analyst roles.
  • Proficiency in one or more back-end programming languages: Node.js, Python, or Java.
  • Strong experience with GCP microservices deployments on GKE, App Engine, Cloud Run, and Cloud Functions.
  • Deep knowledge of AlloyDB, Cloud Spanner, and Cloud SQL for schema design and query optimization.
  • Experience in API development (REST/GraphQL) and integration best practices.
  • Familiarity with Gemini Code Assist for code generation and CLI-based deployments.
  • Understanding of Git-based CI/CD workflows and DevOps practices.
  • Experience integrating AI tools into back-end workflows.
  • Strong understanding of cloud security and compliance requirements.
  • Excellent communication skills for working in a distributed/global team environment.


AI GTM Platform for Faster B2B Pipeline Growth

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
4 - 10 yrs
₹74L - ₹130L / yr
Artificial Intelligence (AI)
Scala
Python
AI Agents
API
+9 more

Senior BackEnd Engineer


The ideal candidate will have a strong background in building scalable applications, a deep understanding of back-end technologies, and experience with cloud infrastructure. As a Back End Engineer, you will be responsible for designing, developing, and maintaining a scalable workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in Scala, Python, AI Agents/APIs, and GCP will be crucial in ensuring our system is reliable, performant, and scalable.


Key Responsibilities:

Back-End Development:

  • Build and maintain back-end services and APIs using Scala.
  • Implement and optimize Orchestration workflow system involving database queries and operations.
  • Build API integrations with Third Party APIs and services.
  • Ensure robust and scalable server-side logic.


Cloud Integration:

  • Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
  • Utilize GCP services to enhance application performance and scalability.
  • Implement cloud-based solutions for data storage, processing, and analytics.


Collaboration And Communication:

  • Work closely with cross-functional teams to define, design, and ship new features.
  • Participate in code reviews and contribute to sharing team knowledge.
  • Document development processes, coding standards, and project requirements.


Qualifications:

  • Educational Background:
  • Completed a master's/bachelor's degree in Computer Science, Engineering, or a related field.
  • Technical Skills:
  • Proficiency in Scala programming language.
  • Strong experience with React (ReactJS).
  • Familiarity with Google Cloud Platform (GCP) and its services.
  • Knowledge of front-end development tools and best practices.
  • Understanding of RESTful API design and implementation.
  • Soft Skills:
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.
  • Eagerness to learn and adapt to new technologies and challenges.


Preferred Qualifications:

  • Experience with version control systems such as Git.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Understanding of workflow management systems and their requirements.
  • Experience with containerization technologies like Docker.

 

Must have Skills

  • Scala - 4 Years
  • React.js - 1 Year
  • RESTful API - 4 Years
  • Docker - 2 Years
  • Python - 3 Years
  • Artificial Intelligence - 2 Years

 

AI GTM Platform for Faster B2B Pipeline Growth

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
4 - 10 yrs
₹75L - ₹120L / yr
React.js
JavaScript
RESTful APIs
API
ReAct (Reason + Act)
+7 more

Senior FrontEnd Software Engineer

The ideal candidate will have a strong background in building scalable web applications, a deep understanding of front-end technologies, and experience with cloud infrastructure. As a Front-End Engineer, you will be responsible for designing, developing, and maintaining a workflow management system. You will work closely with cross-functional teams to build robust and efficient applications that meet the needs of our users. Your expertise in ReactJS, MUI, and API integrations with the backend will be crucial in ensuring our system is intuitive, user-friendly, reliable, and performant.

Key Responsibilities:

Develop and Maintain Front-End Components:

  • Design, develop, and optimize user interfaces using React (ReactJS).
  • Ensure a seamless and responsive user experience.
  • Collaborate with the design team to implement best practices in UI/UX design.


Cloud Integration:

  • Deploy, manage, and monitor applications on Google Cloud Platform (GCP).
  • Utilize GCP services to enhance application performance and scalability.
  • Implement cloud-based solutions for data storage, processing, and analytics.


Collaboration and Communication:

  • Work closely with cross-functional teams to define, design, and ship new features.
  • Participate in code reviews and contribute to sharing team knowledge.
  • Document development processes, coding standards, and project requirements.


Qualifications:

  • Educational Background:
  • Completed a master's/bachelor's degree in Computer Science, Engineering, or a related field.
  • Technical Skills:
  • Proficiency in JavaScript.
  • Strong experience with React, ReactJS and MUI.
  • Familiarity with Google Cloud Platform (GCP) and its services.
  • Knowledge of front-end development tools and best practices.
  • Understanding of RESTful API design and implementation.
  • Soft Skills:
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.
  • Eagerness to learn and adapt to new technologies and challenges.


Preferred Qualifications:

  • Experience with version control systems such as Git.
  • Familiarity with CI/CD pipelines and DevOps practices.
  • Understanding of workflow management systems and their requirements.
  • Experience with containerization technologies like Docker.

 

Must have Skills

  • React.js - 4 Years
  • JavaScript - 4 Years
  • RESTful API - 1 Year
  • Material UI - 3 Years

 

Techjays

at Techjays

1 candid answer
SREEHARIVASU S
Posted by SREEHARIVASU S
Remote only
5 - 10 yrs
₹30L - ₹50L / yr
Design patterns
Data Structures
Relational Database (RDBMS)
Git
Linux/Unix
+3 more

What makes Techjays an inspiring place to work

At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.

We operate as part of the 1% of companies globally that can truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.

Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready.

Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.

We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications.

You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.


Years of Experience: 5 - 8 years


Location: Remote/ Coimbatore


Key Skills:

  • Backend Development (Expert): Python, Django/Flask, RESTful APIs, WebSockets
  • Cloud Technologies (Proficient): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins, GitLab CI, or GitHub Actions
  • Databases (Advanced): PostgreSQL, MySQL, MongoDB
  • AI/ML (Familiar): Basic understanding of Machine Learning concepts, experience with RAG, Vector Databases (Pinecone or ChromaDB or others)
  • Tools (Expert): Git, Docker, Linux

Roles and Responsibilities:

  • Design, develop, and implement highly scalable and secure backend services using Python and Django.
  • Architect and develop complex features for our AI-powered platforms
  • Write clean, maintainable, and well-tested code, adhering to best practices and coding standards.
  • Collaborate with cross-functional teams, including front-end developers, data scientists, and product managers, to deliver high-quality software.
  • Mentor junior developers and provide technical guidance.
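
As a flavour of the RAG work named in the key skills above, retrieval reduces to embedding a query and ranking stored chunks by similarity. The sketch below substitutes a toy bag-of-words embedding and an in-memory list for a real embedding model and vector database (e.g. Pinecone or ChromaDB); all document text and function names are illustrative:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank stored chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


docs = [
    "Django supports PostgreSQL out of the box",
    "Kubernetes schedules containers across nodes",
    "RAG pipelines ground LLM answers in retrieved documents",
]
print(retrieve("how do RAG pipelines ground answers", docs, k=1))
# → ['RAG pipelines ground LLM answers in retrieved documents']
```

In a production system the retrieved chunks would then be stitched into the LLM prompt; only the ranking step changes when swapping in real embeddings.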

What We’re Looking For Beyond Skills

  • Builder mindset — you think in systems, not just tickets
  • Ownership — you take features from idea to production
  • Structured thinking in ambiguous environments
  • Clear communication and collaborative approach
  • Ability to work in a fast-paced, evolving startup environment


What We Offer

  • Competitive compensation
  • Flexible work environment (Remote / Coimbatore office)
  • Paid holidays & flexible time off
  • Medical insurance (Self & Family up to ₹4 Lakhs per person)
  • Opportunity to work on production-grade AI systems
  • Exposure to global clients and high-impact projects
  • A culture that values clarity, integrity, and continuous growth

If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.



WITS Innovation Lab
Prabhnoor Kaur
Posted by Prabhnoor Kaur
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹3L - ₹7L / yr
Terraform
Kubernetes
Jenkins
Ansible
Amazon Web Services (AWS)
+8 more

We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines using GitHub Actions and Jenkins.
  • Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
  • Deploy and manage containerized applications using Docker and Kubernetes.
  • Build, automate, and manage infrastructure as code using Terraform.
  • Configure and manage automation tools and workflows using Ansible.
  • Monitor system performance, troubleshoot production issues, and ensure smooth operations.
  • Implement best practices for code management, release processes, and DevOps standards.
  • Collaborate closely with development teams to improve build pipelines and deployment workflows.
  • Write scripts in Python/Bash to automate operational tasks.
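
A small example of the kind of operational scripting listed above: retrying a flaky call (say, a health check during a rollout) with exponential backoff. This is a generic sketch, not tied to any particular tool in the stack; the injectable sleep function makes it testable without real delays:

```python
import time


def retry(fn, attempts: int = 5, base_delay: float = 1.0, sleep=time.sleep):
    """Retry a flaky operation (e.g. a health check during a rollout)
    with exponential backoff: base_delay, 2*base_delay, 4*base_delay, ..."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))


calls = {"n": 0}

def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

# Inject a no-op sleep so the demo runs instantly.
print(retry(flaky_health_check, sleep=lambda s: None))  # → healthy
```

The same helper wraps naturally around kubectl rollout checks or cloud API calls in automation scripts.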

Required Skills & Experience

  • 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
  • Strong expertise in AWS or GCP cloud services.
  • Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
  • Proficiency with Terraform for infrastructure automation.
  • Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
  • Good knowledge of Jenkins pipelines and automation.
  • Hands-on experience with Ansible for configuration management.
  • Strong scripting skills using Python or Bash.
  • Understanding of monitoring, logging, and security best practices.


Chennai
5 - 8 yrs
₹5L - ₹10L / yr
Google Cloud Platform (GCP)
CI/CD
FOSSA
Terraform

Role Summary

We are looking for a skilled DevSecOps Engineer to design, implement, and secure scalable CI/CD pipelines and cloud infrastructure on Google Cloud Platform. The role focuses on secure application delivery using Cloud Run, GKE, Terraform, and integrated DevSecOps practices to ensure compliance, reliability, and performance.

Key Responsibilities

  • Design and manage secure CI/CD pipelines using Cloud Build, Jenkins, or Tekton
  • Provision and manage GCP infrastructure using Terraform (IaC)
  • Deploy and manage containerized applications on Cloud Run and GKE
  • Implement container security, vulnerability scanning, SAST/DAST, and dependency scanning
  • Enforce IAM, VPC, and cloud security best practices
  • Monitor, log, and troubleshoot environments for performance and reliability
  • Enable development teams with DevSecOps frameworks and governance standards

Relevant Skills

  • Cloud: Google Cloud Platform (GKE, Cloud Run, IAM, VPC, Cloud Build, Artifact Registry)
  • CI/CD Tools: Jenkins, Tekton, Cloud Build
  • Infrastructure as Code: Terraform
  • Containers & Orchestration: Docker, Kubernetes (GKE)
  • Security Tools: Checkmarx (SAST/DAST), FOSSA, container vulnerability scanning tools
  • Monitoring & Observability: GCP Operations Suite (Cloud Monitoring & Logging)
  • Version Control: Git, branch and release management strategies
  • Other: DevSecOps practices, compliance automation, release orchestration


USA Based IT Company

Agency job
Bengaluru (Bangalore)
3 - 11 yrs
₹13L - ₹15L / yr
Google Cloud Platform (GCP)
Azure
M365
Microsoft 365
Account Manager
+9 more

Job Title: Account Manager – USA Market

Experience: 3–8 Years


Department: Sales

US Market experience mandatory


Any gender


Shift timing: 6 PM to 3 AM


Max CTC: 15 LPA (both positions)


In-house desk job


Individual Contributor role


B2B SaaS Company Experience Mandatory


5 days of working from the office.



Role Overview

We are seeking a results-driven Account Manager to manage and grow client relationships in the U.S. market, with a strong preference for candidates with experience in cloud migration services. The ideal candidate will have a proven track record in B2B sales, account growth, and consultative selling within IT services or cloud solutions. You will be responsible for managing existing accounts, identifying expansion opportunities, driving revenue growth, and positioning cloud migration solutions (M365/Azure/GCP) that align with client business objectives.


Key Responsibilities

  • Manage and grow assigned accounts within the USA market.
  • Act as the primary point of contact for client stakeholders.
  • Identify upsell and cross-sell opportunities, particularly in:
    o Cloud migration & modernization
    o Infrastructure transformation
    o Managed cloud services
  • Drive end-to-end sales cycles from requirement gathering to deal closure.
  • Collaborate with pre-sales, cloud architects, and delivery teams to craft tailored cloud migration solutions.
  • Build long-term relationships with CXOs, IT Directors, and decision-makers.
  • Prepare account plans, revenue forecasts, and pipeline reports.
  • Meet and exceed quarterly and annual revenue targets.
  • Negotiate commercial terms and manage contract renewals.
  • Stay up to date with cloud trends, competitive landscape, and US market dynamics.


Required Qualifications

  • 3–8 years of experience in account management, IT services sales, or technology consulting, handling USA clients (mandatory).
  • Proven experience selling cloud solutions and/or IT services.
  • Understanding of:
    o M365, Azure, or Google Cloud platforms
    o Cloud migration strategies (lift & shift, re-platform, re-architect)
    o Application modernization & infrastructure services
  • Strong consultative selling and negotiation skills.
  • Experience managing multi-million-dollar accounts (preferred).
  • Excellent communication and presentation skills.
  • Ability to work in US time zones as required.

Flipr
Arsalan Mobin
Posted by Arsalan Mobin
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹13L / yr
VAPT
Web application security
Cyber Security
DevSecOps
CI/CD
+13 more

About the role:

We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.


The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.


Required Skills & Experience:

● 3 to 6 years of solid hands-on experience in the VAPT domain

● Solid understanding of Web, Android, and iOS application security

● Experience with DevSecOps tools and integrating security into CI/CD

● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models

● Familiarity with bug bounty programs and responsible disclosure practices

● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.

● Good knowledge of API security

● Scripting experience (Python, Bash, or similar) for automation tasks

Preferred Qualifications:

● OSCP, CEH, AWS Security Specialty, or similar certifications

● Experience working in a regulated environment (e.g., FinTech, InsurTech)


Responsibilities:

● Perform Security reviews, Vulnerability Assessments & Penetration Testing for Web, Android, iOS, and API endpoints

● Perform Threat Modelling; anticipate potential attack vectors and improve security architecture on complex or cross-functional components

● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities

● Conduct secure code reviews and red team assessments

● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines

● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.

● Maintain and manage vulnerability scanning infrastructure

● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes

● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring

● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines

● Triage bug bounty reports and coordinate remediation with engineering teams

● Act as the primary responder for external security disclosures

● Maintain documentation and metrics related to bug bounty and penetration testing activities

● Collaborate with developers and architects to ensure secure design decisions

● Lead security design reviews for new features and products

● Provide actionable risk assessments and mitigation plans to stakeholders
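
To illustrate the "automate security checks" responsibility, a CI step for secret scanning can be as simple as running regexes over a diff before merge. The patterns below are deliberately minimal examples invented for this sketch; production scanners of the kind named above ship far broader, vetted rule sets:

```python
import re

# Illustrative patterns only; real secret scanners use much larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}


def scan(text: str) -> list:
    """Return the names of secret patterns found in a blob of text or diff."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]


diff = 'api_key = "abcd1234abcd1234abcd1234"\nprint("hello")'
print(scan(diff))  # → ['generic_api_key']
```

Wired into a CI job, a non-empty result would fail the build and route the finding to the triage process described above.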

Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Mumbai, Pune
7 - 13 yrs
Best in industry
Java
Spring Boot
Microservices
RESTful APIs
Amazon Web Services (AWS)
+1 more

JOB DESCRIPTION:


Location: Pune, Mumbai

Mode of Work : 3 days from Office


Core skills: DSA (Collections, HashMaps, Trees, LinkedLists, Arrays, etc.), core OOP concepts (Multithreading, Multiprocessing, Polymorphism, Inheritance, etc.), annotations in Spring and Spring Boot, Java 8 vital features, database optimization, Microservices, and REST APIs

  • Design, develop, and maintain low-latency, high-performance enterprise applications using Core Java (Java 5.0 and above).
  • Implement and integrate APIs using Spring Framework and Apache CXF.
  • Build microservices-based architecture for scalable and distributed systems.
  • Collaborate with cross-functional teams for high/low-level design, development, and deployment of software solutions.
  • Optimize performance through efficient multithreading, memory management, and algorithm design.
  • Ensure best coding practices, conduct code reviews, and perform unit/integration testing.
  • Work with RDBMS (preferably Sybase) for backend data integration.
  • Analyze complex business problems and deliver innovative technology solutions in the financial/trading domain.
  • Work in Unix/Linux environments for deployment and troubleshooting.
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
8 - 15 yrs
Best in industry
JavaScript
React.js
NodeJS (Node.js)
TypeScript
Amazon Web Services (AWS)
+6 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.


Roles and Responsibilities:

● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers

● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies

● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals

● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration

● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans

● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement

● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.


Requirements:

● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role

● Proven experience in architecting and building web and mobile applications at scale

● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks

● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices

● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams

● Excellent problem-solving, communication, and organizational skills

● Nice to have:

  • Prior experience in working with startups or product-based companies
  • Experience mentoring tech leads and helping shape engineering culture
  • Exposure to AI/ML, data engineering, or platform thinking


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethics and culture.



If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
2 - 5 yrs
Best in industry
Data Structures
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
Scikit-Learn
+3 more

About NonStop io Technologies

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We're seeking an AI/ML Engineer to join our team. As AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.


Responsibilities

● Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI

● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.

● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data

● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics

● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics

● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems

● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes

● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions

● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
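
As a concrete instance of the preprocessing work described above, numeric features are commonly rescaled before training so that no column dominates by magnitude. A minimal min-max scaler is sketched below; a real pipeline would typically use a library implementation such as scikit-learn's MinMaxScaler, and the sample data is invented:

```python
def min_max_scale(values: list) -> list:
    """Scale a numeric feature column to [0, 1]; constant columns map to 0.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


ages = [20.0, 35.0, 50.0]
print(min_max_scale(ages))  # → [0.0, 0.5, 1.0]
```

The same fit-on-train, apply-on-test discipline used by library scalers applies here: the min and max must come from the training split only.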


Qualifications & Skills

● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus

● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects

● Proficiency in programming languages commonly used for AI/ML. Preferably Python

● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.

● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.

● Strong understanding of machine learning algorithms, statistics, and data structures

● Experience with data preprocessing, data wrangling, and feature engineering

● Knowledge of deep learning architectures, neural networks, and transfer learning

● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment

● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code

● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions

● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders

Hyderabad
4 - 8 yrs
₹20L - ₹30L / yr
Generative AI
Artificial Intelligence (AI)
Machine Learning (ML)
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)

We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.


Key Responsibilities:

• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.

• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.

• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).

• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.

• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.

• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.

• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.

• Optimize models for performance, scalability, and reliability.

• Maintain documentation and promote knowledge sharing within the team.
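As a concrete (and deliberately tiny) picture of the retrieval step in a RAG pipeline: the sketch below uses a toy bag-of-words "embedding" and cosine similarity in pure Python. A production system would instead use learned dense embeddings and a vector store such as FAISS, Pinecone, or ChromaDB; the documents here are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real pipelines use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = [
    "Quarterly revenue grew in the north region",
    "The onboarding guide covers laptop setup",
    "Revenue fell in the south region last quarter",
]
top = retrieve("revenue by region", docs, k=2)
```

In a real RAG system, `top` would then be injected into the LLM prompt as grounding context before generation.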


Mandatory Requirements:

• 4+ years of relevant experience as an AI/ML Engineer.

• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.

• Experience implementing RAG pipelines and prompt engineering techniques.

• Strong programming skills in Python.

• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).

• Experience with vector databases (FAISS, Pinecone, ChromaDB).

• Strong understanding of SQL and database systems.

• Experience integrating AI solutions into BI tools (Power BI, Tableau).

• Strong analytical, problem-solving, and communication skills.

Good to Have:

• Experience with cloud platforms (AWS, Azure, GCP).

• Experience with Docker or Kubernetes.

• Exposure to NLP, computer vision, or deep learning use cases.

• Experience in MLOps and CI/CD pipelines

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune, Trivandrum , Thiruvananthapuram
8 - 10 yrs
₹20L - ₹24L / yr
Java
Python
API
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Job Details

Job Title: Lead Software Engineer - Java, Python, API Development

Industry: Global digital transformation solutions provider

Domain - Information technology (IT)

Experience Required: 8-10 years

Employment Type: Full Time

Job Location: Pune & Trivandrum/ Thiruvananthapuram

CTC Range: Best in Industry

 

Job Description

Job Summary

We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.

 

Key Responsibilities

  • Design, develop, and maintain backend services and APIs using Java and Python
  • Build and optimize Java-based APIs for large-scale data processing
  • Ensure high performance, scalability, and reliability of backend systems
  • Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
  • Collaborate with cross-functional teams to deliver production-ready solutions
  • Lead technical design discussions and guide best practices

 

Requirements

  • 8+ years of experience in backend software development
  • Strong proficiency in Java and Python
  • Proven experience building scalable APIs and data-driven applications
  • Hands-on experience with cloud services and distributed systems
  • Solid understanding of databases, microservices, and API performance optimization

 

Nice to Have

  • Experience with Spring Boot, Flask, or FastAPI
  • Familiarity with Docker, Kubernetes, and CI/CD pipelines
  • Exposure to Kafka, Spark, or other big data tools

 

Skills

Java, Python, API Development, Data Processing, AWS Backend


 

Must-Haves

Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices

8+ years of experience in backend software development

Strong proficiency in Java and Python

Proven experience building scalable APIs and data-driven applications

Hands-on experience with cloud services and distributed systems

Solid understanding of databases, microservices, and API performance optimization

Mandatory Skills: Java, API development, and AWS

 


Notice period - 0 to 15 days only

Job stability is mandatory

Location: Pune, Trivandrum

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
3 - 5 yrs
Best in industry
React.js
Angular (2+)
Vue.js
Python
Java

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.


Roles and Responsibilities:

● Design, develop, and maintain scalable web applications

● Build responsive and high-performance user interfaces

● Develop secure and efficient backend services and APIs

● Collaborate with product managers, designers, and QA teams to deliver features

● Write clean, maintainable, and testable code

● Participate in code reviews and contribute to engineering best practices

● Optimize applications for speed, performance, and scalability

● Troubleshoot and resolve production issues

● Contribute to architectural decisions and technical improvements.
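To picture the "backend services and APIs" responsibility at its smallest, here is a stdlib-only sketch of a JSON health endpoint. The `/health` route and response shape are illustrative assumptions; a real service would add routing, authentication, and a production-grade server.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal JSON response; a real service would dispatch on self.path.
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for this demo.
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    payload = json.loads(resp.read())

server.shutdown()
```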


Requirements:

● 3 to 5 years of experience in full-stack development

● Strong proficiency in frontend technologies such as React, Angular, or Vue

● Solid experience with backend technologies such as Node.js, .NET, Java, or Python

● Experience in building RESTful APIs and microservices

● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server

● Experience with version control systems like Git

● Familiarity with CI/CD pipelines

● Good understanding of cloud platforms such as AWS, Azure, or GCP

● Strong understanding of software design principles and data structures

● Experience with containerization tools such as Docker

● Knowledge of automated testing frameworks

● Experience working in Agile environments


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

House Of Shipping
Chennai
10 - 14 yrs
₹10L - ₹15L / yr
MuleSoft
Warehouse Management System (WMS)
API
Google Cloud Platform (GCP)
JSON

Key Responsibilities 

  • Lead the design and development of MuleSoft APIs following API-led connectivity principles (System, Process, Experience layers). 
  • Architect and implement complex integrations between OMS/WMS platforms (Manhattan preferred) and external systems including ERP, TMS, marketplaces, and shopping carts. 
  • Drive e-commerce integration initiatives for order ingestion, inventory synchronization, returns processing, and shipment tracking. 
  • Deploy and integrate OMS solutions hosted on Google Cloud Platform, leveraging services such as Cloud Run, Pub/Sub, and Cloud Storage. 
  • Manage Apigee API Gateway configurations, including proxies, policies, authentication, and analytics. 
  • Develop and maintain DataWeave transformations for multi-format data (JSON, XML, CSV, EDI). 
  • Mentor junior MuleSoft developers and enforce best practices for integration design, coding standards, and performance optimization. 
  • Participate in CI/CD pipeline setup and manage automated deployments for MuleSoft applications. 
  • Collaborate with product, architecture, and QA teams to ensure solutions meet business, performance, and security requirements. 
  • Monitor and troubleshoot integration flows to ensure high availability, scalability, and reliability
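The DataWeave requirement above is, at its core, about mapping data between formats. As a rough stdlib illustration of the same kind of mapping (a JSON order flattened to CSV lines), here is a Python sketch; the field names are hypothetical, and a real MuleSoft flow would express this transformation in DataWeave.

```python
import csv
import io
import json

# Hypothetical inbound order payload (field names invented for illustration).
order_json = json.dumps({
    "orderId": "SO-1001",
    "lines": [
        {"sku": "ABC-1", "qty": 2},
        {"sku": "XYZ-9", "qty": 1},
    ],
})

def order_to_csv(raw: str) -> str:
    # Flatten a nested JSON order into one CSV row per order line.
    order = json.loads(raw)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["order_id", "sku", "qty"])
    for line in order["lines"]:
        writer.writerow([order["orderId"], line["sku"], line["qty"]])
    return buf.getvalue()

flat = order_to_csv(order_json)
```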


 

Required Qualifications 

  • Bachelor’s degree in Computer Science, Information Systems, or related field. 
  • 5+ years of experience in MuleSoft Anypoint Platform development (Mule 4). 
  • Proven experience with OMS/WMS integrations (Manhattan preferred) in supply chain or logistics domains. 
  • Strong experience integrating shopping carts and marketplaces (Shopify, Magento, BigCommerce, Amazon, Walmart). 
  • Proficiency in Apigee API Gateway (proxy design, security, analytics). 
  • Experience with Google Cloud Platform services for integration deployments. 
  • Strong DataWeave transformation skills for JSON, XML, CSV, and EDI data mapping. 
  • Expertise in REST/SOAP API design and integration best practices. 
  • Familiarity with B2B EDI transactions (888, 840, 850, 856, 810). 


Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Trivandrum, Thiruvananthapuram
9 - 12 yrs
₹21L - ₹27L / yr
Java
Spring
Apache Kafka
SQL
PostgreSQL

JOB DETAILS:

Job Title: Java Lead (Trivandrum) - Java (Core & Enterprise), Spring/Micronaut, Kafka

Industry: Global Digital Transformation Solutions Provider

Salary: Best in Industry

Experience: 9 to 12 years

Location: Trivandrum, Thiruvananthapuram

 

Job Description

Experience

  • 9+ years of experience in Java-based backend application development
  • Proven experience building and maintaining enterprise-grade, scalable applications
  • Hands-on experience working with microservices and event-driven architectures
  • Experience working in Agile and DevOps-driven development environments

 

Mandatory Skills

  • Advanced proficiency in core Java and enterprise Java concepts
  • Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
  • Strong expertise in SQL, including database design, query optimization, and performance tuning
  • Hands-on experience with PostgreSQL or other relational database management systems
  • Strong experience with Kafka or similar event-driven messaging and streaming platforms
  • Practical knowledge of CI/CD pipelines using GitLab
  • Experience with Jenkins for build automation and deployment processes
  • Strong understanding of GitLab for source code management and DevOps workflows

 

Responsibilities

  • Design, develop, and maintain robust, scalable, and high-performance backend solutions
  • Develop and deploy microservices using Spring or Micronaut frameworks
  • Implement and integrate event-driven systems using Kafka
  • Optimize SQL queries and manage PostgreSQL databases for performance and reliability
  • Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
  • Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
  • Ensure code quality through best practices, reviews, and automated testing

 

Good-to-Have Skills

  • Strong problem-solving and analytical abilities
  • Experience working with Agile development methodologies such as Scrum or Kanban
  • Exposure to cloud platforms such as AWS, Azure, or GCP
  • Familiarity with containerization and orchestration tools such as Docker or Kubernetes

 

Skills: Java, Spring Boot, Kafka, CI/CD, PostgreSQL, GitLab

 

Must-Haves

Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)

Advanced proficiency in core Java and enterprise Java concepts

Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications

Strong expertise in SQL, including database design, query optimization, and performance tuning

Hands-on experience with PostgreSQL or other relational database management systems

Strong experience with Kafka or similar event-driven messaging and streaming platforms

Practical knowledge of CI/CD pipelines using GitLab

Experience with Jenkins for build automation and deployment processes

Strong understanding of GitLab for source code management and DevOps workflows

 

 


Notice period - 0 to 15 days only

Job stability is mandatory

Location: only Trivandrum

F2F Interview on 21st Feb 2026

 

Deqode

Posted by Samiksha Agrawal
Remote only
4 - 6 yrs
₹4L - ₹18L / yr
Google Cloud Platform (GCP)
Databricks
Apache Spark

Job Title: Data Engineer – GCP (Fullstack)

Location: Remote (Chennai Preferred)

Shift: Day Shift

Experience: 4+ Years


Role Overview

We are seeking a skilled Data Engineer / Platform Engineer to drive value delivery within cross-functional squads by leveraging strong technical expertise. The role involves designing, building, and supporting scalable data and application solutions using GCP, Databricks, Apache Spark, and cloud-native services, while following Agile and engineering best practices.


Key Responsibilities

  • Design, build, and maintain backend services and APIs using C#, deployed on GCP Cloud Run.
  • Develop and support scalable data and application solutions using Databricks.
  • Implement and manage data governance, security, and lineage using Unity Catalog.
  • Utilize Apache Spark for large-scale data processing and performance optimization.
  • Build, optimize, and maintain robust data pipelines and transformations.
  • Work closely with cross-functional teams in Agile squads for solution delivery.
  • Implement CI/CD pipelines (preferably using Azure DevOps).
  • Manage Infrastructure as Code (IaC) using Terraform on GCP.
  • Work with Firestore (NoSQL) and relational databases like PostgreSQL/MySQL.
  • Perform debugging, troubleshooting, and performance tuning of applications and data workloads.
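As a toy illustration of the "data pipelines and transformations" responsibility, here is a dedupe-then-aggregate stage in plain Python. In this role the same logic would typically run on Spark/Databricks DataFrames at scale; the event schema here is hypothetical.

```python
from collections import defaultdict

# Raw events, possibly with duplicate deliveries (hypothetical schema).
raw = [
    {"user": "a", "event_id": 1, "amount": 10.0},
    {"user": "a", "event_id": 1, "amount": 10.0},  # duplicate delivery
    {"user": "a", "event_id": 2, "amount": 5.0},
    {"user": "b", "event_id": 3, "amount": 7.5},
]

def dedupe(events):
    # Keep only the first occurrence of each event_id.
    seen, out = set(), []
    for e in events:
        if e["event_id"] not in seen:
            seen.add(e["event_id"])
            out.append(e)
    return out

def total_by_user(events):
    # Aggregate amounts per user, like a groupBy().sum() in Spark.
    totals = defaultdict(float)
    for e in events:
        totals[e["user"]] += e["amount"]
    return dict(totals)

totals = total_by_user(dedupe(raw))
```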


Required Skills & Expertise

  • 4+ years of experience in Data Engineering / Platform Engineering.
  • Strong hands-on experience with Databricks and Apache Spark.
  • Experience with Unity Catalog for governance and access control.
  • Strong knowledge of GCP services, especially Cloud Run.
  • Proficiency in building REST APIs using C#.
  • Experience with CI/CD pipelines (Azure DevOps preferred).
  • Experience with Terraform (IaC on GCP).
  • Hands-on experience with Firestore and relational databases.
  • Strong analytical, problem-solving, and debugging skills.
  • Experience working in Agile environments.




Well established Fintech Co.


Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
8 - 12 yrs
₹30L - ₹35L / yr
Data Science
Python
Artificial Intelligence (AI)
Google Vertex AI
Google Cloud Platform (GCP)

We are looking for a visionary and hands-on Head of Data Science and AI with at least 6 years of experience to lead our data strategy and analytics initiatives. In this pivotal role, you will take full ownership of the end-to-end technology stack, driving a data-analytics-driven business roadmap that delivers tangible ROI. You will not only guide high-level strategy but also remain hands-on in model design and deployment, ensuring our data capabilities directly empower executive decision-making.

If you are passionate about leveraging AI and Data to transform financial services, we invite you to lead our data transformation journey.

Key Responsibilities

Strategic Leadership & Roadmap

  • End-to-End Tech Stack Ownership: Define, own, and evolve the complete data science and analytics technology stack to ensure scalability and performance.
  • Business Roadmap & ROI: Develop and execute a data analytics-driven business roadmap, ensuring every initiative is aligned with organizational goals and delivers measurable Return on Investment (ROI).
  • Executive Decision Support: Create and present high-impact executive decision packs, providing actionable insights that drive key business strategies.

Model Design & Deployment (Hands-on)

  • Hands-on Development: Lead by example with hands-on involvement in AI modeling, machine learning model design, and algorithm development using Python.
  • Deployment & Ops: Oversee and execute the deployment of models into production environments, ensuring reliability, scalability, and seamless integration with existing systems.
  • Leverage expert-level knowledge of Google Cloud Agentic AI, Vertex AI and BigQuery to build advanced predictive models and data pipelines.
  • Develop business dashboards for various sales channels and drive data driven decision making to improve sales and reduce costs. 
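To keep the hands-on modeling expectation concrete, here is the smallest possible predictive model: a one-dimensional least-squares fit in pure Python. Production work would of course use scikit-learn, Vertex AI, or similar; the sample data (monthly spend vs. outcome) is invented.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (1-D closed form).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Hypothetical historical data points.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.1, 7.9]
a, b = fit_line(xs, ys)
pred = a * 5.0 + b  # prediction for the next period
```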

Governance & Quality

  • Data Governance: Establish and enforce robust data governance frameworks, ensuring data accuracy, security, consistency, and compliance across the organization.
  • Best Practices: Champion best practices in coding, testing, and documentation to build a world-class data engineering culture.

Collaboration & Innovation

  • Work closely with Product, Engineering, and Business leadership to identify opportunities for AI/ML intervention.
  • Stay ahead of industry trends in AI, Generative AI, and financial modeling to keep Bajaj Capital at the forefront of innovation.

Must-Have Skills & Experience

Experience:

  • At least 7 years of industry experience in Data Science, Machine Learning, or a related field.
  • Proven track record of applying AI and leading data science teams or initiatives that resulted in significant business impact.

Technical Proficiency:

  • Core Languages: Proficiency in Python is mandatory, with strong capabilities in libraries such as Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
  • Cloud Data Stack: Expert-level command of Google Cloud Platform (GCP), specifically Agentic AI, Vertex AI and BigQuery.
  • AI & Analytics Stack: Deep understanding of the modern AI and Data Analytics stack, including data warehousing, ETL/ELT pipelines, and MLOps.
  • Visualization: PowerBI in combination with custom web/mobile applications.

Leadership & Soft Skills:

  • Ability to translate complex technical concepts into clear business value for stakeholders.
  • Strong ownership mindset with the ability to manage end-to-end project lifecycles.
  • Experience in creating governance structures and executive-level reporting.

Good-to-Have / Plus

  • Domain Expertise: Prior experience in the BFSI domain (Wealth Management, Insurance, Mutual Funds, or Fintech).
  • Certifications: Google Professional Data Engineer or Google Professional Machine Learning Engineer certifications.
  • Advanced AI: Experience with Generative AI (LLMs), RAG architectures, and real-time analytics.


Healthcare Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹25L - ₹30L / yr
MLOps
Generative AI
Python
Natural Language Processing (NLP)
Machine Learning (ML)

JOB DETAILS:

* Job Title: Principal Data Scientist

* Industry: Healthcare

* Salary: Best in Industry

* Experience: 6-10 years

* Location: Bengaluru

 

Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps

 

Criteria:

  1. Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
  2. Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
  3. Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
  4. Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
  5. Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.

 

Job Description

Principal Data Scientist

(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)

 

Job Details

  • Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
  • Location: Hebbal Ring Road, Bengaluru
  • Work Mode: Work from Office
  • Shift: Day Shift
  • Reporting To: SVP
  • Compensation: Best in the industry (for suitable candidates)

 

Educational Qualifications

  • Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
  • Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage

 

Experience Required

  • 7+ years of experience solving real-world problems using:
  • Natural Language Processing (NLP)
  • Automatic Speech Recognition (ASR)
  • Large Language Models (LLMs)
  • Machine Learning (ML)
  • Preferably within the healthcare domain
  • Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable

Role Overview

This position sits within a healthcare division of Focus Group specializing in medical coding and scribing.

We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:

  • Reduce administrative burden in EMR data entry
  • Improve provider satisfaction and productivity
  • Enhance quality of care and patient outcomes

Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.

The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.

 

Key Responsibilities

AI Strategy & Solution Development

  • Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
  • Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
  • Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
  • Design scalable, reusable, and production-ready AI frameworks for speech and text analytics

Model Development & Optimization

  • Fine-tune, train, and optimize large-scale NLP and ASR models
  • Develop and optimize ML algorithms for speech, text, and structured healthcare data
  • Conduct rigorous testing and validation to ensure high clinical accuracy and performance
  • Continuously evaluate and enhance model efficiency and reliability
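One standard way to make the "rigorous testing and validation" bullet concrete for ASR is word error rate (WER): the word-level edit distance between a reference transcript and the model's hypothesis, normalized by reference length. A minimal pure-Python sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution or match
            )
    return d[-1][-1] / len(ref)

# One substituted word out of four -> WER of 0.25.
wer = word_error_rate("patient denies chest pain", "patient denies chess pain")
```

In clinical settings the same metric is typically tracked per specialty and per term class, since a single substituted drug name matters far more than a dropped filler word.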

Cloud & MLOps Implementation

  • Architect and deploy AI models on AWS, Azure, or GCP
  • Deploy and manage models using containerization, Kubernetes, and serverless architectures
  • Design and implement robust MLOps strategies for lifecycle management

Integration & Compliance

  • Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
  • Integrate AI systems with EHR/EMR platforms
  • Implement ethical AI practices, regulatory compliance, and bias mitigation techniques

Collaboration & Leadership

  • Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
  • Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
  • Mentor and lead junior data scientists and engineers
  • Contribute to AI research, publications, patents, and long-term AI strategy

 

Required Skills & Competencies

  • Expertise in Machine Learning, Deep Learning, and Generative AI
  • Strong Python programming skills
  • Hands-on experience with PyTorch and TensorFlow
  • Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
  • Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
  • Experience with text embeddings and vector databases
  • Proficiency in cloud platforms (AWS, Azure, GCP)
  • Experience with LangChain, OpenAI APIs, and RAG architectures
  • Knowledge of agentic AI frameworks and reinforcement learning
  • Familiarity with Docker, Kubernetes, and MLOps best practices
  • Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
  • Strong communication, collaboration, and mentoring skills

 

 

CLOUDSUFI
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 12 yrs
₹25L - ₹45L / yr
Artificial Intelligence (AI)
Generative AI
Large Language Models (LLM) tuning
Retrieval Augmented Generation (RAG)
Vertex

About Us :


CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values :


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement :


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace.


Role : Lead AI/Senior Engineer-AI


Location : Noida, Delhi/NCR


Experience : 5- 12 years


Education : BTech / BE / MCA / MSc Computer Science


Must Haves :


Conversational AI & NLU :


- Advanced proficiency with Dialogflow CX


- Intent classification, entity extraction, conversation flow design


- Experience building structured dialogue flows with routing logic


- CCAI platform familiarity
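As a toy picture of intent classification and routing: Dialogflow CX learns intents from training phrases, but the basic mechanics can be illustrated with a keyword-overlap stand-in. The intent names and keyword sets below are hypothetical.

```python
# Hypothetical intent definitions; Dialogflow CX would learn these
# from training phrases rather than fixed keyword sets.
INTENTS = {
    "billing": {"bill", "invoice", "charge", "refund"},
    "tech_support": {"error", "crash", "broken", "slow"},
    "greeting": {"hello", "hi", "hey"},
}

def classify(utterance: str) -> str:
    # Score each intent by keyword overlap; route to fallback on no match.
    tokens = set(utterance.lower().replace("?", "").replace(".", "").split())
    scores = {intent: len(tokens & kw) for intent, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

route = classify("Why is there an extra charge on my invoice?")
```

In a real flow, the resolved intent would select the next page or route in the dialogue graph, with the fallback branch triggering reprompts or human handoff.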


Agentic AI & Multi-Step Reasoning :


- Production experience with Google ADK (or LangChain/LangGraph equivalent)


- Multi-step reasoning and tool orchestration capability


- Tool-use patterns and function calling implementation


RAG Systems & Knowledge Management :


- Hands-on Vertex AI RAG Engine experience (or equivalent)


- Semantic search, chunking strategies, retrieval optimization


- Document processing pipelines (PDF parsing, chunking)


LLM/GenAI & Prompt Engineering :


- Production experience with Gemini models


- Advanced prompt engineering for customer support


- Langfuse experience for prompt management


Google Cloud Platform & Vertex AI :


- Advanced Vertex AI proficiency (Generative AI APIs, Agent Engine)


- Cloud Functions and Cloud Run deployment experience


- BigQuery for conversation analytics


API Integration :


- Genesys Cloud CX integration experience


- REST API design and webhook implementation


- Enterprise authentication patterns (OAuth 2.0)


Good To Have :


Conversational AI & NLU :


- Multi-language support implementation (Spanish/English)


- Telephony integration (speech recognition, TTS, DTMF)


- Barge-in handling and voice optimization


Agentic AI :


- Agent state management and session persistence


- Advanced fallback strategies and error recovery


- Dynamic tool selection and evaluation


RAG Systems :


- Re-ranking and advanced retrieval quality metrics


- Query expansion and context-aware retrieval


- Corpus organization strategies


LLM/GenAI :


- Prompt versioning, A/B testing, iterative refinement


- Prompt injection mitigation strategies


- In-context learning, few-shot, chain-of-thought techniques


LLMOps & Observability :


- Vertex AI Evaluation Service experience


- Groundedness, relevance, coherence, safety metrics


- Trace-level debugging with Cloud Trace


- Centralized logging strategies


Google Cloud :


- Application Integration connectors


- VPC Service Controls and enterprise security


- Cloud Pub/Sub for event-driven systems


Enterprise Integration :


- Third-party AI agent orchestration (SAP Joule, ServiceNow AI, Agentforce)


- Salesforce, SAP, ServiceNow integration patterns


- Context passage strategies for escalations


Architecture & System Design :


- Configuration-driven systems (Meta-Agent patterns)


- Microservices and containerization


- Scalable, multi-tenant system design


- Disaster recovery and failover strategies


Product Quality & KPIs :


- Customer support metrics expertise (CSAT, SSR, escalation rate)


- A/B testing and experimentation frameworks


- User feedback loop implementation


Deliverables :


- Architecture Design : End-to-end platform architecture, data flow diagrams, Dialogflow CX vs. ADK routing decisions


- Conversational Flows : 15+ dialogue flows covering billing, networking, appointments, troubleshooting, and escalations


- ADK Agent Implementation : Complex reasoning agents for technical support, account analysis, and context preparation


- RAG Pipeline : Document processing, chunking configuration, corpus organization (product docs, support articles, policies, promotions)


- Prompt Management : System prompts, Langfuse setup, playbook governance, version control


- Quality Framework : Evaluation pipeline, metrics dashboards, automated assessment, continuous improvement recommendations


- Integration Layer : Genesys handoff, webhook integrations, Application Integration setup, session management


- Testing & Validation : Conversation flow tests, performance testing (latency, throughput, 1000 concurrent users), security validation


- Response time <2 seconds (p95), 99.9% uptime, 1000 concurrent conversations


- Data encryption (TLS 1.2+, AES-256 at rest), PII redaction, 1-year data retention


- Graceful degradation and fallback mechanisms

Digital transformation excellence provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
12 - 20 yrs
₹30L - ₹40L / yr
Product Management
Business-to-business
Analytics
Product engineering
Procurement management

JOB DETAILS:

* Job Title: Head of Engineering/Senior Product Manager

* Industry: Digital transformation excellence provider

* Salary: Best in Industry

* Experience: 12-20 years

* Location: Mumbai

 

Job Description

Role Overview

The VP / Head of Technology will lead the company's technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across the company's core verticals, including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.

This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.

 

Roles and Responsibilities:

Technology Execution & Architecture Leadership

·        Own and execute the technology roadmap aligned with business goals.

·        Build and maintain scalable architecture supporting multiple verticals.

·        Enforce engineering best practices, code quality, performance, and security.

·        Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.

 

Product & Engineering Delivery

·        Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.

·        Own the full SDLC — requirements, design, development, testing, deployment, support.

·        Implement Agile, DevOps, CI/CD for faster releases and improved reliability.

·        Oversee product/platform interoperability across all company systems.

 

Vertical-Specific Technology Leadership

Procurement Tech:

·        Lead architecture and enhancements of procurement and indirect spend platforms.

·        Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.

 

eCommerce:

·        Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.

 

Sustainability/ESG:

·        Support development of GHG tracking, reporting systems, and sustainability analytics platforms.

 

Business Services:

·        Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.

 

Data, Cloud, Security & Infrastructure

·        Own cloud infrastructure strategy (Azure/AWS/GCP).

·        Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).

·        Lead cybersecurity policies, monitoring, threat detection, and recovery planning.

·        Drive observability, cost optimization, and system scalability.

 

AI, Automation & Innovation

·        Integrate AI/ML, analytics, and automation into product platforms and service delivery.

·        Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.

·        Lead R&D for emerging tech aligned to business needs.

 

Leadership & Team Management

·        Lead and mentor engineering managers, architects, developers, QA, and DevOps.

·        Drive a culture of ownership, innovation, continuous learning, and performance accountability.

·        Build capability development frameworks and internal talent pipelines.

 

Stakeholder Collaboration

·        Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.

·        Ensure transparent reporting on project status, risks, and technology KPIs.

·        Manage vendor relationships, technology partnerships, and external consultants.

 

Education, Training, Skills, and Experience Requirements:

Experience & Background

·        16+ years in technology execution roles, including 5–7 years in senior leadership.

·        Strong background in multi-product engineering for B2B platforms or enterprise systems.

·        Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.

 

Technical Skills

·        Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.

·        Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.

·        Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.

·        Strong understanding of security, compliance, scalability, performance engineering.

 

Leadership Competencies

·        Execution-focused technology leadership.

·        Strong communication and stakeholder management skills.

·        Ability to lead distributed teams, manage complexity, and drive measurable outcomes.

·        Innovation mindset with practical implementation capability.

 

Education

·        Bachelor’s or Master’s in Computer Science/Engineering or equivalent.

·        Additional leadership education (MBA or similar) is a plus, not mandatory.

 

Travel Requirements

·        Occasional travel for client meetings, technology reviews, or global delivery coordination.

 

Must-Haves

·        10+ years of technology experience, with at least 6 years leading large (50-100+) multi-product engineering teams.

·        Must have worked on B2B platforms; experience in Procurement Tech or Supply Chain is required.

·        Min. 10+ years of expertise in Cloud-Native Architecture: expert-level design in Azure, AWS, or GCP using microservices, Kubernetes (K8s), and Docker.

·        Min. 8+ years of expertise in Modern Engineering Practices: advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).

·        Hands-on leadership experience in Security & Compliance.

·        Min. 3+ years of expertise in AI & Data Engineering: practical implementation of LLMs, predictive analytics, or AI-driven automation.

·        Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.

·        Min. 6+ years of expertise in B2B eCommerce: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.

·        Strong product management exposure

·        Proven experience in leading end-to-end team operations

·        Relevant experience in product-driven organizations or platforms

·        Strong Subject Matter Expertise (SME)

 

Education: Master's degree.

 

**************

Joining time / Notice Period: Immediate to 45 days.

Location: Andheri, Mumbai.

5 days working (hybrid: 3 days in office, 2 days from home)

Virtana

Posted by Krutika Devadiga
Pune
5 - 10 yrs
Best in industry
Python
Kubernetes
Docker
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Role Overview:

Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.


We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products.


Work Location: Pune/ Chennai


Job Type: Hybrid


Role Responsibilities:

  • Be primarily responsible for the design and development of software solutions for the Virtana Platform.
  • Partner and work closely with team leads, architects, and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
  • Communicate effectively with people having differing levels of technical knowledge.
  • Work closely with Quality Assurance and DevOps teams, assisting with functional and system testing design and deployment.
  • Provide customers with complex application support, problem diagnosis, and problem resolution.

 

Required Qualifications:

  • Minimum of 4 years of experience in a web-application-centric client-server development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
  • Able to understand integrated infrastructure platforms, with experience in one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
  • Minimum of 4 years of development experience in a high-level language such as Python, Java, or Go.
  • Bachelor's (B.E., B.Tech) or Master's degree (M.E., M.Tech., MCA) in Computer Science, Computer Engineering, or equivalent.
  • 2+ years of development experience in a public cloud environment (Google Cloud and/or AWS) using Kubernetes or similar.

 

Desired Qualifications:

  • Prior experience with other virtualization platforms like OpenShift is a plus
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
  • Firsthand experience developing Systems, Network, and Performance Management software and/or solutions is a plus
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus

 

About Virtana:

Virtana delivers the industry's broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.

Trential Technologies

Posted by Garima Jangid
Gurugram
5 - 8 yrs
₹30L - ₹45L / yr
Node.js
JavaScript
RabbitMQ
Apache Kafka
Redis

About us:

Trential is engineering the future of digital identity with W3C Verifiable Credentials—secure, decentralized, privacy-first. We make identity and credentials verifiable anywhere, instantly.


We are looking for a Team Lead to architect, build, and scale high-performance web applications that power our core products. You will lead the full development lifecycle, from system design to deployment, while mentoring the team and driving best engineering practices across frontend and backend stacks.


 Design & Implement: Lead the design, implementation and management of Trential products.

 Lead by example: Be the most senior and impactful engineer on the team, setting the technical bar through your direct contributions.

 Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.

 Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.

 Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.

 Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

 Experience: 5+ years in software development, with at least 2 years as a Technical Lead.

 Technical Depth: Deep proficiency in JavaScript and experience in building and operating distributed, fault-tolerant systems.

 Cloud & Infrastructure: Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).

 Databases: Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.


Preferred Qualifications (Nice to Have)

 Identity & Credentials: Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)

 Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems that comply with them.

 Experience integrating AI/ML models into verification or data extraction workflows

Remote only
6 - 10 yrs
₹15L - ₹25L / yr
Cloud Architect
Data Architect
Data Analytics
Google Cloud Platform (GCP)
Apache Kafka

Role Summary

Provide architectural leadership across a large-scale, multi-cloud data ecosystem, helping design and guide scalable, future-ready data platforms.


Key Responsibilities

• Design and review data architectures across Google Cloud and Microsoft Azure (multi-cloud).

• Guide decisions around data platforms, pipelines, streaming, and integration patterns.

• Advise on abstraction layers, APIs, messaging/streaming (Kafka, MQ), and system interoperability.

• Partner with engineering teams to ensure designs are practical and executable.


Key Skills

• Deep experience with large-scale data platforms and distributed systems.

• Strong background in multi-cloud architectures (GCP + Azure).

• Expertise in data pipelines, streaming, and enterprise integration.
