Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru)

Apply to 50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

appscrip
Posted by Nilam Surti
Bengaluru (Bangalore)
0 - 0 yrs
₹3L - ₹4L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Terraform

Looking for Fresher developers


Responsibilities:

  • Implement integrations requested by customers
  • Deploy updates and fixes
  • Provide Level 2 technical support
  • Build tools to reduce occurrences of errors and improve customer experience
  • Develop software to integrate with internal back-end systems
  • Perform root cause analysis for production errors
  • Investigate and resolve technical issues
  • Develop scripts to automate visualization
  • Design procedures for system troubleshooting and maintenance


Requirements and skills:

Familiarity with the DevOps Engineer role or a similar software engineering role

Good knowledge of Terraform and Kubernetes

Working knowledge of AWS and Google Cloud



You can contact me directly on nine three one six one two zero one three two

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore), Hyderabad
5 - 10 yrs
₹10L - ₹18L / yr
Data Analytics
SQL
Databricks
Amazon Web Services (AWS)
Windows Azure
+4 more

Position : Senior Data Analyst

Experience Required : 5 to 8 Years

Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)

Shift Timing : 11:00 AM – 8:00 PM IST

Notice Period : Immediate Joiners Only


Job Summary :

We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.

The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.


Key Responsibilities :

  • Lead the design, execution, and delivery of advanced data analysis projects.
  • Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
  • Create and maintain interactive dashboards, reports, and visualizations.
  • Perform root cause analysis and uncover meaningful patterns from large datasets.
  • Present analytical findings to senior leaders and non-technical audiences.
  • Maintain data integrity, quality, and governance in all reporting and analytics solutions.
  • Mentor junior analysts and support their professional development.
  • Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.

Must-Have Skills :

  • Strong proficiency in SQL and Databricks (see the sketch after this list)
  • Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
  • Sound understanding of data warehousing concepts and BI best practices
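
As a flavour of the SQL-on-Databricks work this role involves, here is a minimal PySpark sketch; the sales table and its columns are illustrative placeholders, not details from this posting:

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession is provided as `spark`; creating one here
# keeps the sketch self-contained.
spark = SparkSession.builder.appName("revenue-summary").getOrCreate()

# Hypothetical sales data: region, order date, revenue.
sales = spark.createDataFrame(
    [("South", "2024-01-05", 1200.0), ("North", "2024-01-06", 800.0)],
    ["region", "order_date", "revenue"],
)
sales.createOrReplaceTempView("sales")

# A typical dashboard-feeding aggregation.
summary = spark.sql("""
    SELECT region,
           SUM(revenue) AS total_revenue,
           COUNT(*)     AS orders
    FROM sales
    GROUP BY region
    ORDER BY total_revenue DESC
""")
summary.show()
```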

Good-to-Have :

  • Experience with AWS
  • Exposure to machine learning and predictive analytics
  • Industry-specific analytics experience (preferred but not mandatory)
Talent Pro
Posted by Mayank choudhary
Pune, Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
  • Strong Site Reliability Engineer (SRE - CloudOps) profile
  • Mandatory (Experience 1) - Must have a minimum of 1 year of experience in SRE (CloudOps)
  • Mandatory (Core Skill 1) - Must have experience with Google Cloud Platform (GCP)
  • Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, New Relic, Pingdom, or PagerDuty (see the sketch after this list)
  • Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management.
  • Mandatory (Company) - B2C Product Companies.
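
By way of illustration, a minimal sketch of the kind of instrumentation this monitoring stack scrapes, using the Python prometheus_client library; the metric names and the toy workload are assumptions, not part of the role description:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical service metrics; Prometheus scrapes them from the
# /metrics endpoint exposed below, and Grafana charts them.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                  # records the duration of this block
        time.sleep(random.random() / 10)  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)               # serves metrics on :8000/metrics
    while True:
        handle_request()
```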


Talent Pro
Posted by Mayank choudhary
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹30L / yr
Google Cloud Platform (GCP)
  • Strong Senior Unity Developer profile
  • Mandatory (Experience 1) - Must have a minimum of 2 years of experience in game/application development using Unity.
  • Mandatory (Experience 2) - Must have strong experience in backend development using C#.
  • Mandatory (Experience 3) - Must have strong experience in multiplayer game development with Unity, preferably using Photon Networking (PUN) or Photon Fusion.
  • Mandatory (Company) - B2C Product Companies

Preferred

  • Preferred (Education) - B.E / B.Tech


Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Chennai, Bengaluru (Bangalore), Hyderabad, Pune, Delhi
3.5 - 10 yrs
₹6L - ₹25L / yr
Google Cloud Platform (GCP)
BigQuery
Dataflow
Cloud Storage

GCP Data Engineer Job Description

A GCP Data Engineer is responsible for designing, building, and maintaining data pipelines, architectures, and systems on Google Cloud Platform (GCP). Here's a breakdown of the job:


Key Responsibilities

- Data Pipeline Development: Design and develop data pipelines using GCP services like Dataflow, BigQuery, and Cloud Pub/Sub (see the sketch after this list).

- Data Architecture: Design and implement data architectures to meet business requirements.

- Data Processing: Process and analyze large datasets using GCP services like BigQuery and Cloud Dataflow.

- Data Integration: Integrate data from various sources using GCP services like Cloud Data Fusion and Cloud Pub/Sub.

- Data Quality: Ensure data quality and integrity by implementing data validation and data cleansing processes.
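
To make the first responsibility concrete, a minimal Apache Beam sketch of a streaming pipeline that reads from Cloud Pub/Sub and writes to BigQuery, the Dataflow-style pattern described above; the project, subscription, table, and schema names are placeholders:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder resource names -- substitute real project/subscription/table IDs.
SUBSCRIPTION = "projects/my-project/subscriptions/events-sub"
TABLE = "my-project:analytics.events"

options = PipelineOptions(streaming=True)  # on Dataflow, add --runner=DataflowRunner

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
        | "ParseJson" >> beam.Map(json.loads)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            TABLE,
            schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```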


Essential Skills

- GCP Services: Strong understanding of GCP services like BigQuery, Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage.

- Data Engineering: Experience with data engineering concepts, including data pipelines, data warehousing, and data integration.

- Programming Languages: Proficiency in programming languages like Python, Java, or Scala.

- Data Processing: Knowledge of data processing frameworks like Apache Beam and Apache Spark.

- Data Analysis: Understanding of data analysis concepts and tools like SQL and data visualization.

hirezyai
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 10 yrs
₹12L - ₹25L / yr
ArgoCD
Kubernetes
Docker
Helm
Terraform
+9 more

Job Summary:

We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.

Key Responsibilities:

  • CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
  • Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker (see the sketch after this list).
  • Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
  • Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
  • Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
  • Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
  • Collaboration: Work closely with development teams to optimize deployments and performance.
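
As one small example of the troubleshooting side of the orchestration work above, a sketch using the official Kubernetes Python client to flag under-replicated deployments; the kubeconfig assumption and the check itself are illustrative:

```python
from kubernetes import client, config

# Assumes a local kubeconfig; inside a cluster you would call
# config.load_incluster_config() instead.
config.load_kube_config()
apps = client.AppsV1Api()

# Flag deployments whose ready replicas lag the desired count.
for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:
        ns, name = dep.metadata.namespace, dep.metadata.name
        print(f"{ns}/{name}: {ready}/{desired} replicas ready")
```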

Required Skills & Qualifications:

  • Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
  • Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
  • Cloud Platforms: Experience with AWS, GCP, or Azure.
  • Programming & Scripting: Proficiency in Python, Bash, or Go.
  • Version Control: Hands-on with Git and GitOps workflows.
  • Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.

Nice to Have:

  • Experience with Kubernetes Operators, Kustomize, or FluxCD.
  • Exposure to serverless architectures and multi-cloud deployments.
  • Certifications in CKA, AWS DevOps, or similar.


HeyCoach
Posted by DeepanRaj R
Bengaluru (Bangalore)
4 - 12 yrs
₹0.1L - ₹0.1L / yr
Python
NodeJS (Node.js)
React.js
Data Structures
Natural Language Processing (NLP)
+5 more


Tech Lead (Full Stack) – Nexa (Conversational Voice AI Platform)

Location: Bangalore

Type: Full-time

Experience: 4+ years (preferably in early-stage startups)

Tech Stack: Python (core), Node.js, React.js

 

 

About Nexa

Nexa is a new venture by the founders of HeyCoach, Pratik Kapasi and Aditya Kamat, on a mission to build the most intuitive voice-first AI platform. We’re rethinking how humans interact with machines using natural, intelligent, and fast conversational interfaces.

We're looking for a Tech Lead to join us at the ground level. This is a high-ownership, high-speed role for builders who want to move fast and go deep.

 

What You’ll Do

●     Design, build, and scale backend and full-stack systems for our voice AI engine

●     Work primarily with Python (core logic, pipelines, model integration), and support full-stack features using Node.js and React.js (see the sketch after this list)

●     Lead projects end-to-end—from whiteboard to production deployment

●     Optimize systems for performance, scale, and real-time processing

●     Collaborate with founders, ML engineers, and designers to rapidly prototype and ship features

 ●     Set engineering best practices, own code quality, and mentor junior team members as we grow
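
For a taste of the Python-first backend work described above, a minimal FastAPI sketch of a conversational endpoint; the route, model, and stubbed reply are hypothetical and not Nexa's actual API:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Utterance(BaseModel):
    text: str

@app.post("/respond")
def respond(utterance: Utterance) -> dict:
    # Stub: a real implementation would invoke the voice-AI pipeline here.
    return {"reply": f"You said: {utterance.text}"}

# Run locally with: uvicorn main:app --reload
```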

 

✅ Must-Have Skills

●     4+ years of experience in Python, building scalable production systems

●     Has led projects independently, from design through deployment

●     Excellent at executing fast without compromising quality

●     Strong foundation in system design, data structures and algorithms

●     Hands-on experience with Node.js and React.js in a production setup

●     Deep understanding of backend architecture—APIs, microservices, data flows

●     Proven success working in early-stage startups, especially during 0→1 scaling phases

●     Ability to debug and optimize across the full stack

●     High autonomy—can break down big problems, prioritize, and deliver without hand-holding

  

🚀 What We Value

●     Speed > Perfection: We move fast, ship early, and iterate

●     Ownership mindset: You act like a founder, even if you're not one

●     Technical depth: You’ve built things from scratch and understand what’s under the hood

●     Product intuition: You don’t just write code—you ask if it solves the user’s problem

●     Startup muscle: You’re scrappy, resourceful, and don’t need layers of process

●     Bias for action: You unblock yourself and others. You push code and push thinking

●     Humility and curiosity: You challenge ideas, accept better ones, and never stop learning

 

💡 Nice-to-Have

●     Experience with NLP, speech interfaces, or audio processing

●     Familiarity with cloud platforms (GCP/AWS), CI/CD, Docker, Kubernetes

●     Contributions to open-source or technical blogs

●     Prior experience integrating ML models into production systems

 

Why Join Nexa?

●     Work directly with founders on a product that pushes boundaries in voice AI

●     Be part of the core team shaping product and tech from day one

●     High-trust environment focused on output and impact, not hours

●     Flexible work style and a flat, fast culture

appscrip
Posted by Kanika Gaur
Bengaluru (Bangalore), Surat
3 - 5 yrs
₹4.8L - ₹11L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)

Job Title: Lead DevOps Engineer

Experience Required: 4 to 5 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
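
Production provisioning in this role would go through Terraform or CloudFormation as stated above; purely as an illustration of the scripting side, a boto3 sketch that launches a tagged instance (region, AMI, and tags are placeholders):

```python
import boto3

# Illustrative only: IaC tools, not ad-hoc scripts, should own real provisioning.
ec2 = boto3.resource("ec2", region_name="ap-south-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "env", "Value": "staging"}],
    }],
)
print("launched:", instances[0].id)
```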

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can contact us directly: nine three one six one two zero one three two

Cymetrix Software
Posted by Netra Shettigar
Bengaluru (Bangalore), Chennai
5 - 8 yrs
₹10L - ₹28L / yr
Data modeling
OLAP
OLTP
BigQuery
Google Cloud Platform (GCP)

Bangalore / Chennai

  • Hands-on data modelling for OLTP and OLAP systems
  • In-depth knowledge of Conceptual, Logical and Physical data modelling
  • Strong understanding of Indexing, partitioning, data sharding, with practical experience of having done the same
  • Strong understanding of variables impacting database performance for near-real-time reporting and application interaction.
  • Should have working experience on at least one data modelling tool, preferably DBSchema or Erwin
  • Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery.
  • People with functional knowledge of the mutual fund industry will be a plus


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Ability to design and implement Dimensional and Fact tables

● Identify and implement data transformation/cleansing requirements

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse

● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions

● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
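
A minimal sketch of one such load step, rebuilding a fact table from a staging table with the google-cloud-bigquery client; the datasets, tables, and cleansing rule are placeholders rather than this project's actual schema:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured

# Hypothetical nightly ETL step: refresh a fact table from staging.
sql = """
CREATE OR REPLACE TABLE analytics.fact_orders AS
SELECT o.order_id,
       o.customer_id,
       DATE(o.ordered_at) AS order_date,
       o.amount
FROM staging.orders AS o
WHERE o.amount IS NOT NULL  -- basic cleansing rule
"""
client.query(sql).result()  # .result() blocks until the job completes
print("fact_orders refreshed")
```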

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform into reporting & analytics.

● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.

● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.


Required Skills:

● Bachelor’s degree in Computer Science or similar field or equivalent work experience.

● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.

● Expert with data warehousing concepts, strategies, and tools.

● Strong SQL background.

● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.

● Strong experience in GCP & Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions, and GCS

● Good to have knowledge on SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).

● Knowledge of AWS and Azure Cloud is a plus.

● Experience in Informatica Power exchange for Mainframe, Salesforce, and other new-age data sources.

● Experience in integration using APIs, XML, JSONs etc.

Gruve
Pune, Bengaluru (Bangalore)
3 - 5 yrs
Up to ₹30L / yr (varies)
Retrieval Augmented Generation (RAG)
Generative AI
Chatbot
Amazon Web Services (AWS)
Windows Azure
+3 more

We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.


Key Roles & Responsibilities: 

  • Design and implement software solutions that power machine learning models, particularly LLMs (see the sketch after this list)
  • Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects 
  • Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges 
  • Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility 
  • Design and implement scalable infrastructure for developing and deploying GenAI solutions
  • Support model deployment and API integration to ensure interaction with existing enterprise systems.
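
As a minimal illustration of wiring an LLM into Python code, a sketch using the Hugging Face transformers pipeline; the small open model is a stand-in, since real client work would use a production checkpoint or hosted endpoint:

```python
from transformers import pipeline

# gpt2 keeps the sketch small and runnable; swap in a production model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize the key risks of deploying LLMs in the enterprise:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```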

Basic Qualifications: 

  • A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field 
  • Experience: 3-5 Years 
  • Strong programming skills in Python and Java 
  • Good understanding of machine learning fundamentals 
  • Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn) 
  • Familiarity with frontend development and frameworks like React 
  • Basic knowledge of LLMs and transformer-based architectures is a plus.

Preferred Qualifications 

  • Excellent problem-solving skills and an eagerness to learn in a fast-paced environment 
  • Strong attention to detail and ability to communicate technical concepts clearly 
YOptima Media Solutions Pvt Ltd
Bengaluru (Bangalore)
8 - 11 yrs
₹40L - ₹60L / yr
Generative AI
Google Cloud Platform (GCP)
Python

Why This Role Matters

  • We are looking for a Staff Engineer to lead the technical direction and hands-on development of our next-generation, agentic AI-first marketing platforms. This is a high-impact role to architect, build, and ship products that change how marketers interact with data, plan campaigns, and make decisions.


What You'll Do

  • Build Gen-AI native products: Architect, build, and ship platforms powered by LLMs, agents, and predictive AI
  • Stay hands-on: Design systems, write code, debug, and drive product excellence
  • Lead with depth: Mentor a high-caliber team of full stack engineers.
  • Speed to market: Rapidly ship and iterate on MVPs to maximize learning and feedback.
  • Own the full stack: from backend data pipelines to intuitive UIs, from Airflow to React, from BigQuery to embeddings (see the sketch after this list).
  • Scale what works: Ensure scalability, security, and performance in multi-tenant, cloud-native environments (GCP).
  • Collaborate deeply: Work closely with product, growth, and leadership to align tech with business priorities.
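
To ground the "BigQuery to embeddings" end of that stack, a minimal embedding sketch with the open-source sentence-transformers library; the model choice and sample texts are assumptions, as the platform's actual embedding provider isn't specified here:

```python
from sentence_transformers import SentenceTransformer

# Open-source embedding model as a stand-in for the platform's provider.
model = SentenceTransformer("all-MiniLM-L6-v2")

campaigns = [
    "Diwali sale push for tier-2 cities",
    "Always-on retargeting for cart abandoners",
]
vectors = model.encode(campaigns)  # numpy array, one 384-dim vector per text
print(vectors.shape)               # (2, 384)
```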


What You Bring

  • 8+ years of experience building and scaling full-stack, data-driven products
  • Proficiency in backend (Node.js, Python) and frontend (React), with solid GCP experience
  • Strong grasp of data pipelines, analytics, and real-time data processing
  • Familiarity with Gen-AI frameworks (LangChain, LlamaIndex, OpenAI APIs, vector databases)
  • Proven architectural leadership and technical ownership
  • Product mindset with a bias for execution and iteration


Our Tech Stack

  • Cloud: Google Cloud Platform
  • Backend: Node.js, Python, Airflow
  • Data: BigQuery, Cloud SQL
  • AI/ML: TensorFlow, OpenAI APIs, custom agents
  • Frontend: React.js


What You Get

  • Meaningful equity in a high-growth startup
  • The chance to build global products from India
  • A culture that values clarity, ownership, learning, humility, and candor
  • A rare opportunity to build with Gen-AI from the ground up


Who You Are

  • You’re initiative-driven, not interruption-driven.
  • You code because you love building things that matter.
  • You enjoy ambiguity and solve problems from first principles.
  • You believe true leadership is contextual, hands-on, and grounded.
  • You’re here to build — not just maintain.
  • You care deeply about seeing your products empower real users, run reliably at scale, and adapt intelligently with minimal manual effort.
  • You know that elegant code is just 30% of the job — the real craft lies in the engineering rigour, edge-case handling, and production resilience that make great products truly dependable.
Gruve
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
5yrs+
Up to ₹50L / yr (varies)
Python
SQL
Data engineering
Apache Spark
PySpark
+6 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. 

Key Roles & Responsibilities:

  • Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake (see the sketch after this list).
  • Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
  • Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
  • Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
  • Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
  • Implement data governance, security, and compliance best practices.
  • Build and maintain data models, transformations, and data marts for analytics and reporting.
  • Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
  • Automate infrastructure and deployments using Terraform, Airflow, or dbt.
  • Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
  • Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
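
A minimal PySpark sketch of the bronze-to-gold pattern behind those pipeline and Lakehouse responsibilities; paths, columns, and rules are illustrative, and the Delta output assumes a Delta-enabled runtime such as Databricks:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks `spark` already exists and Delta is the default format.
spark = SparkSession.builder.getOrCreate()

# Bronze -> Silver: parse raw events and drop malformed rows.
bronze = spark.read.json("/mnt/lake/bronze/events")
silver = (
    bronze
    .where(F.col("user_id").isNotNull())        # basic quality rule
    .withColumn("event_date", F.to_date("ts"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")

# Silver -> Gold: daily aggregates for analytics and reporting.
gold = silver.groupBy("event_date").agg(F.count("*").alias("events"))
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/daily_events")
```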


Basic Qualifications:

  • Bachelor’s or Master’s Degree in Computer Science or Data Science.
  • 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
  • Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
  • Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
  • Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
  • Proficiency in SQL, Python, or Scala for data transformation and analytics.
  • Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
  • Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
  • Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
  • Strong understanding of data governance, access control, and encryption strategies.
  • Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.


Preferred Qualifications:

  • Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
  • Experience in BI and analytics tools (Tableau, Power BI, Looker).
  • Familiarity with data observability tools (Monte Carlo, Great Expectations).
  • Experience with machine learning feature engineering pipelines in Databricks.
  • Contributions to open-source data engineering projects.
The Alter Office
Posted by Harsha Ravindran
Bengaluru (Bangalore)
1 - 4 yrs
₹6L - ₹10L / yr
NodeJS (Node.js)
MySQL
SQL
MongoDB
Express
+9 more

Job Title: Backend Developer

Location: In-Office, Bangalore, Karnataka, India


Job Summary:

We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.


Annual Compensation: 6-10 LPA


Responsibilities:

  • Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
  • Architect and implement complex backend solutions, ensuring high availability and performance.
  • Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
  • Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
  • Promote a culture of collaboration, knowledge sharing, and continuous improvement.
  • Implement and enforce best practices for code quality, security, and performance optimization.
  • Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
  • Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
  • Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
  • Conduct system design reviews and contribute to architectural discussions.
  • Stay updated with industry trends and emerging technologies to drive innovation within the team.
  • Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
  • Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.


Requirements:

  • Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
  • Extensive experience with JavaScript backend frameworks (e.g., Express, Socket) and a deep understanding of their ecosystems.
  • Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
  • Practical experience with Redis and caching mechanisms to enhance application performance.
  • Proficient in RESTful API design and development, with a strong understanding of API security best practices.
  • In-depth knowledge of asynchronous programming and event-driven architecture.
  • Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
  • Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
  • Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
  • Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
  • Excellent problem-solving, analytical, and communication skills.
  • Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
Sketchmonk
Posted by Sales SketchMonk
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹25L / yr
Rust
CI/CD
Docker
Google Cloud Platform (GCP)

Job Description

We are looking for a passionate and skilled Rust Developer with at least 3 years of experience to join our growing development team. The ideal candidate will be proficient in building robust and scalable APIs using the Rocket framework, and have hands-on experience with PostgreSQL and the Diesel ORM. You will be working on performance-critical backend systems, designing APIs, managing deployments, and collaborating closely with other developers.


Responsibilities

  • Design, develop, and maintain APIs using Rocket.
  • Work with PostgreSQL databases, using Diesel ORM for efficient data access.
  • Write clean, maintainable, and efficient Rust code.
  • Apply object-oriented and functional programming principles effectively.
  • Build and consume RESTful APIs and WebSockets for real-time communication.
  • Handle server-side deployments and assist in managing the infrastructure.
  • Optimize application performance and ensure high availability.
  • Collaborate with frontend developers and DevOps engineers to integrate systems smoothly.
  • Participate in code reviews and technical discussions.
  • Apply knowledge of data structures and algorithms to solve complex problems efficiently.


Requirements

  • 3+ years of experience working with Rust in production environments.
  • Strong hands-on experience with Rocket framework.
  • Solid understanding of Diesel ORM and PostgreSQL.
  • Good grasp of OOP and functional programming concepts.
  • Familiarity with RESTful APIs, WebSockets, and other web protocols.
  • Experience handling application deployments and basic server management.
  • Strong foundation in data structures, algorithms, and software design principles.
  • Ability to write clean, well-documented, and testable code.
  • Good communication skills and ability to work collaboratively.


Package

  • As per Industry standards


Nice to Have

  • Experience with CI/CD pipelines.
  • Familiarity with containerization tools like Docker.
  • Knowledge of cloud platforms (AWS, GCP, etc.).
  • Contribution to open-source Rust projects.
  • Knowledge of basic cryptographic primitives (AES, hashing, etc.).


Perks & Benefits

  • Competitive compensation.
  • Flexible work hours and remote-friendly culture.
  • Opportunity to work with a modern tech stack.
  • Supportive team and growth-oriented environment.


If you're passionate about Rust, love building high-performance systems, and enjoy solving real-world problems with elegant code, we’d love to connect! Apply now and let’s craft powerful backend experiences together! ⚙️🚀

France-based AI-tech startup

Agency job
via Recruit Square by Priyanka choudhary
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹17L - ₹30L / yr
Python
NodeJS (Node.js)
MongoDB
Firebase
Google Cloud Platform (GCP)
+1 more

As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.


What You’ll Do :
  • Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
  • Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks). 
  • Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
  • Pipelines & Automation: Build robust real-time and batch data pipelines. Automate CI/CD and testing for backend and frontend services (see the sketch after this list).
  • Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
  • Dev Environments: Set up and manage developer and staging environments across teams.
  • Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
  • Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
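
As a small illustration of the pipeline work above, a sketch that publishes an event to Cloud Pub/Sub with the google-cloud-pubsub client; the project, topic, and payload are placeholders:

```python
import json

from google.cloud import pubsub_v1

# Assumes GCP credentials are configured; names are placeholders.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "sensor-events")

event = {"device_id": "cam-42", "status": "online"}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("published message", future.result())  # blocks until the server acks
```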


Must-Haves :

  • 5+ years of experience in backend development and cloud infrastructure.
  • Strong expertise in Node.js (TypeScript) and/or Python.
  • Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB...).
  • Deep understanding of cloud platforms, preferably GCP and Firebase.
  • Hands-on experience with CI/CD, DevOps tools, and automation.
  • Solid knowledge of distributed systems and performance tuning.
  • Experience setting up and managing development & staging environments. 

  • Proficiency in English and remote communication.


Good to have :

  • Event-driven architecture experience (e.g., Pub/Sub, MQTT).
  • Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
  • Previous work on large-scale SaaS products.
  • Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
  • Experience with edge computing on Nvidia Jetson devices.


What We Offer :

  • Competitive salary for the Indian market (depending on experience).
  • Remote-first culture with async-friendly communication.
  • Autonomy and responsibility from day one.
  • A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
  • A mission-driven company tackling real-world environmental challenges. 


PGAGI
Posted by Javeriya Shaik
Remote, Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹12L / yr
Docker
Jenkins
Windows Azure
Amazon Web Services (AWS)
GitHub
+4 more

Position: Project Manager

Location: Bengaluru, India (Hybrid/Remote flexibility available)

Company: PGAGI Consultancy Pvt. Ltd


About PGAGI

At PGAGI, we are building the future where human and artificial intelligence coexist to solve complex problems, accelerate innovation, and power sustainable growth. We develop and deploy advanced AI solutions across industries, making AI not just a tool but a transformational force for businesses and society.


Position Summary

PGAGI is seeking a dynamic and experienced Project Manager to lead cross-functional engineering teams and drive the successful execution of multiple AI/ML-centric projects. The ideal candidate is a strategic thinker with a solid background in engineering-led product/project management, especially in AI/ML product lifecycles. This role is crucial to scaling our technical operations, ensuring seamless collaboration, timely delivery, and high-impact results across initiatives.


Key Responsibilities

• Lead Engineering Teams Across AI/ML Projects: Manage and mentor cross-functional teams of ML engineers, DevOps professionals, and software developers through agile delivery cycles, ensuring timely and high-quality execution of AI-focused initiatives.

• Drive Agile Project Execution: Define project scope, objectives, timelines, and deliverables using Agile/Scrum methodologies. Ensure continuous sprint planning, backlog grooming, and milestone tracking via tools like Jira or GitHub Projects.

• Manage Multiple Concurrent Projects: Oversee the full lifecycle of multiple high-priority projects—ranging from AI model development and infrastructure integration to client delivery and platform enhancements.

• Collaborate with Technical and Business Stakeholders: Act as the bridge between engineering, research, and client-facing teams, translating complex requirements into actionable tasks and product features.

• Maintain Engineering and Infrastructure Quality: Uphold rigorous engineering standards across deployments. Coordinate testing, model performance validation, version control, and CI/CD operations.

• Budget and Resource Allocation: Optimize resource distribution across teams, track project costs, and ensure effective use of cloud infrastructure and personnel to maximize project ROI.

• Risk Management & Mitigation: Identify risks proactively across technical and operational layers. Develop mitigation plans and troubleshoot issues that may impact timelines or performance.

• Monitor KPIs and Delivery Metrics: Establish and monitor performance indicators such as sprint velocity, deployment frequency, incident response times, and customer satisfaction for each release.

• Support Continuous Improvement: Foster a culture of feedback and iteration. Champion retrospectives and process reviews to continually refine development practices and workflows.

Qualifications:

• Education: Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.

• Experience: Minimum 5 years of experience as a Project Manager, with at least 2 years managing AI/ML or software engineering teams.

• Tech Expertise: Familiarity with AI/ML lifecycles, cloud platforms (AWS, GCP, or Azure), and DevOps pipelines (Docker, Kubernetes, GitHub Actions, Jenkins).

• Tools: Strong experience with Jira, Confluence, and project tracking/reporting tools.

• Leadership: Proven success leading high-performing engineering teams in a fast-paced, innovative environment.

• Communication: Excellent written and verbal skills to interface with both technical and non-technical stakeholders.

• Certifications (Preferred): PMP, CSM, or certifications in AI/ML project management or cloud technologies.


Why Join PGAGI?

• Lead cutting-edge AI/ML product teams building scalable, impactful solutions.

• Be part of a fast-growing, innovation-driven startup environment.

• Enjoy a collaborative, intellectually stimulating workplace with growth opportunities.

• Competitive compensation and performance-based rewards.

• Access to learning resources, mentoring, and AI/DevOps communities.

Kenscio
Posted by Parikshith D B
Bengaluru (Bangalore)
1 - 4 yrs
₹4L - ₹10L / yr
NodeJS (Node.js)
MySQL
TypeScript
Amazon Web Services (AWS)
Windows Azure
+1 more

A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, what customers need may be a mobile stack, a web stack, or a native application stack.


You will be responsible for:

  • Build reusable code and libraries for future use.
  • Own & build new modules/features end-to-end independently.
  • Collaborate with other team members and stakeholders.


Required Skills:

  • Thorough understanding of Node.js and TypeScript.
  • Excellence in at least one framework like StrongLoop LoopBack, Express.js, Sails.js, etc.
  • Basic architectural understanding of modern-day web applications.
  • Diligence for coding standards.
  • Must be good with Git and Git workflows.
  • Experience with external integrations is a plus.
  • Working knowledge of AWS, GCP, or Azure; expertise with Linux-based systems.
  • Experience with CI/CD tools like Jenkins is a plus.
  • Experience with testing and automation frameworks.
  • Extensive understanding of RDBMS systems.

Wissen Technology
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹17L / yr
Python
SQL
ETL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Job Summary:

We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.

Key Responsibilities:

  • Assist in the design, development, and maintenance of scalable and efficient data pipelines.
  • Write clean, maintainable, and performance-optimized SQL queries.
  • Develop data transformation scripts and automation using Python.
  • Support data ingestion processes from various internal and external sources.
  • Monitor data pipeline performance and help troubleshoot issues.
  • Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
  • Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
  • Document technical processes and pipeline architecture.

Core Skills Required:

  • Proficiency in SQL (data querying, joins, aggregations, performance tuning).
  • Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy); a short sketch follows this list.
  • Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
  • Understanding of relational databases and data warehouse concepts.
  • Familiarity with version control systems like Git.
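
A minimal pandas sketch of the ingest-clean-aggregate loop described in this role; the file, columns, and quality rule are illustrative:

```python
import pandas as pd

# Hypothetical ingestion step: load a raw extract, clean it, and
# produce a daily aggregate ready for a warehouse table.
raw = pd.read_csv("orders.csv", parse_dates=["ordered_at"])

clean = (
    raw.dropna(subset=["order_id", "amount"])            # basic quality rule
       .assign(order_date=lambda d: d["ordered_at"].dt.date)
)

daily = (
    clean.groupby("order_date", as_index=False)["amount"]
         .sum()
         .rename(columns={"amount": "total_amount"})
)
daily.to_parquet("daily_orders.parquet", index=False)
```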

Preferred Qualifications:

  • Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
  • Familiarity with data modeling and data integration concepts.
  • Basic knowledge of CI/CD practices for data pipelines.
  • Bachelor’s degree in Computer Science, Engineering, or related field.


Talent Pro
Bengaluru (Bangalore)
4 - 8 yrs
₹26L - ₹35L / yr
Java
Spring Boot
Google Cloud Platform (GCP)
Distributed Systems
Microservices
+3 more

Role & Responsibilities

Responsible for ensuring that the architecture and design of the platform remain top-notch with respect to scalability, availability, reliability and maintainability

Act as a key technical contributor as well as a hands-on contributing member of the team.

Own end-to-end availability and performance of features, driving rapid product innovation while ensuring a reliable service.

Working closely with various stakeholders like Program Managers, Product Managers, the Reliability and Continuity Engineering (RCE) team, and the QE team to estimate and execute features/tasks independently.

Maintain and drive tech backlog execution for non-functional requirements of the platform required to keep the platform resilient

Assist in release planning and prioritization based on technical feasibility and engineering constraints

A zeal to continually find new ways to improve architecture, design and ensure timely delivery and high quality.

Cymetrix Software
Posted by Netra Shettigar
Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹26L / yr
Google Cloud Platform (GCP)
BigQuery
Data modeling
Snowflake schema
OLTP
+1 more

1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java

2. ETL on GCP Cloud - build pipelines (Python/Java) + scripting, best practices, challenges

3. Knowledge of batch and streaming data ingestion; build end-to-end data pipelines on GCP

4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs NoSQL, types of NoSQL DBs (at least 2 databases)

5. Data Warehouse concepts - beginner to intermediate level

6. Data Modeling, GCP databases, DB Schema (or similar)

7. Hands-on data modelling for OLTP and OLAP systems

8. In-depth knowledge of Conceptual, Logical and Physical data modelling

9. Strong understanding of indexing, partitioning, data sharding, with practical experience of having done the same

10. Strong understanding of variables impacting database performance for near-real-time reporting and application interaction

11. Should have working experience on at least one data modelling tool, preferably DBSchema or Erwin

12. Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery

13. People with functional knowledge of the mutual fund industry will be a plus. Should be willing to work from Chennai; office presence is mandatory.


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Ability to design and implement Dimensional and Fact tables

● Identify and implement data transformation/cleansing requirements

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse

● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions

● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.

● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.

● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Kolkata
8 - 15 yrs
₹25L - ₹45L / yr
Java
Spring Boot
Microservices
Leadership
Team leadership
+11 more

Job Title : Lead Java Developer (Backend)

Experience Required : 8 to 15 Years

Open Positions : 5

Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)

Work Mode : Open to Remote / Hybrid / Onsite

Notice Period : Immediate Joiner/30 Days or Less


About the Role :

  • We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
  • This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
  • This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.


Key Responsibilities :

  • Design, develop, and implement scalable backend systems using Java and Spring Boot.
  • Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
  • Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
  • Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
  • Guide and mentor team members, fostering technical excellence and ownership.
  • Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.

What We’re Looking For :

  • Proven experience in Java backend development (Spring Boot, Microservices).
  • 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Good understanding of containerization and orchestration tools like Docker and Kubernetes.
  • Exposure to DevOps and Infrastructure as Code practices.
  • Strong problem-solving skills and the ability to design solutions from first principles.
  • Prior experience in product-based or startup environments is a big plus.

Ideal Candidate Profile :

  • A tech enthusiast with a passion for clean code and scalable architecture.
  • Someone who thrives in collaborative, transparent, and feedback-driven environments.
  • A leader who takes ownership beyond individual deliverables to drive overall team and project success.

Interview Process

  1. Initial Technical Screening (via platform partner)
  2. Technical Interview with Engineering Team
  3. Client-facing Final Round

Additional Info :

  • Targeting profiles from product/startup backgrounds.
  • Strong preference for candidates with under 1 month of notice period.
  • Interviews will be fast-tracked for qualified profiles.
Wekan Enterprise Solutions
Bengaluru (Bangalore)
7 - 12 yrs
Best in industry
NodeJS (Node.js)
MongoDB
Microservices
NestJS
Amazon Web Services (AWS)
+4 more

Backend - Software Development Engineer III


Experience - 7+ yrs


About Wekan Enterprise Solutions


Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.


Job Description


We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?

You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams and MongoDB Solutions Architects.


Location - Chennai or Bangalore


  • Relevant experience of 7+ years building high-performance back-end applications with at least 3 or more projects delivered using the required technologies
  • Good problem solving skills
  • Strong mentoring capabilities
  • Good understanding of software development life cycle
  • Strong experience in system design and architecture
  • Strong focus on quality of work delivered
  • Excellent verbal and written communication skills


Required Technical Skills


  • Extensive hands-on experience building high-performance web back-ends using Node.js and JavaScript/TypeScript
  • Minimum of two years of hands-on experience in NestJS
  • Strong experience with the Express.js framework
  • Implementation experience in monolithic and microservices architecture
  • Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
  • Experience integrating with any 3rd party services such as cloud SDKs (Preferable X), payments, push notifications, authentication etc…
  • Hands-on experience with Redis, Kafka, or X
  • Exposure to unit testing with frameworks such as Mocha, Chai, Jest or others
  • Strong experience writing and maintaining clear documentation


Good to have skills:


  • Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
  • Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies


Xebia IT Architects
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM to 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (see the sketch after this list).
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
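To make the Raw → Silver → Gold flow concrete, below is a minimal PySpark sketch of the kind of layered transformation this role involves. The storage paths, column names, and Delta availability (as on a Databricks cluster) are illustrative assumptions, not details from the role.

```python
# Minimal medallion-style pipeline sketch; all paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Raw layer: land source data as-is.
raw = spark.read.json("/mnt/lake/raw/orders/")  # hypothetical landing path

# Silver layer: cleanse, drop malformed rows, conform types.
silver = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.mode("overwrite").format("delta").save("/mnt/lake/silver/orders/")

# Gold layer: business-level aggregate for reporting.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.mode("overwrite").format("delta").save("/mnt/lake/gold/daily_revenue/")
```

In practice each layer would typically be a separate task in an Airflow DAG so failures can be retried per stage.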

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Upto ₹50L / yr (Varies)
DevOps
CI/CD
Git
Kubernetes
Ansible

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 

Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog (see the sketch after this list).
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 
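As one concrete angle on the observability item above, here is a minimal sketch of instrumenting a Python service with the prometheus_client library; the metric names and the simulated workload are hypothetical.

```python
# Expose custom metrics for Prometheus to scrape; workload is simulated.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    REQUESTS.inc()                 # count every request
    with LATENCY.time():           # record how long the work takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)        # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```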


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  •  Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform, Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications  

  • Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
Wekan Enterprise Solutions

at Wekan Enterprise Solutions

2 candid answers
Deepak  N
Posted by Deepak N
Bengaluru (Bangalore), Chennai
12 - 22 yrs
Best in industry
NodeJS (Node.js)
MongoDB
Microservices
Javascript
TypeScript

Architect


Experience - 12+ yrs


About Wekan Enterprise Solutions


Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IOT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.

 

Job Description

We are looking for passionate architects eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?

You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers’ technical teams and MongoDB Solutions Architects.

Location - Chennai or Bangalore


●     Relevant experience of 12+ years building high-performance applications with at least 3+ years as an architect.

●     Good problem solving skills

●     Strong mentoring capabilities

●     Good understanding of software development life cycle

●     Strong experience in system design and architecture

●     Strong focus on quality of work delivered

●     Excellent verbal and written communication skills

 

Required Technical Skills

 

● Extensive hands-on experience building high-performance applications using Node.Js (Javascript/Typescript) and .NET/ Golang / Java / Python.

● Strong experience with appropriate framework(s).

● Well-versed in monolithic and microservices architectures.

● Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases

● Experience working with 3rd party integrations ranging from authentication, cloud services, etc.

● Hands-on experience with Kafka or RabbitMQ (see the sketch after this list).

● Hands-on experience with CI/CD pipelines and at least one cloud provider - AWS / GCP / Azure

● Strong experience writing and maintaining clear documentation
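For the Kafka item above, a minimal sketch of an event producer using the kafka-python client; the broker address, topic name, and payload are assumptions for illustration.

```python
# Publish a JSON-encoded domain event that downstream services can consume.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # broker address is an assumption
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("orders.created", {"order_id": "A1", "amount": 10.5})
producer.flush()  # block until the event is actually delivered
```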

  

Good to have skills:

 

●     Experience working with frontend technologies - React.Js or Vue.Js or Angular.

●     Extensive experience consulting with customers directly for defining architecture or system design.

●     Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies

GreenStitch Technologies PVT LTD
Paridhi Mudgal
Posted by Paridhi Mudgal
Bengaluru (Bangalore)
3 - 9 yrs
₹15L - ₹25L / yr
Spring Boot
Java
Python
Google Cloud Platform (GCP)

Apply Link - https://tally.so/r/wv0lEA


Key Responsibilities:

  1. Software Development:
  • Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
  • Contribute to the development of microservices, APIs, or UI components as per the project requirements.
  2. System Architecture:
  • Collaborate on the design and enhancement of system architecture.
  • Analyse and identify opportunities for performance improvements and scalability.
  3. Code Reviews and Mentorship:
  • Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
  • Mentor and support junior developers, fostering a culture of learning and growth.
  4. Agile Collaboration:
  • Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
  • Collaborate with the Carbon Science team, designers, and other stakeholders to translate requirements into technical solutions.
  5. Problem-Solving:
  • Investigate, troubleshoot, and resolve complex issues in production and development environments.
  • Contribute to incident management and root cause analysis to improve system reliability.
  6. Continuous Improvement:
  • Stay up-to-date with emerging technologies and industry trends.
  • Propose and implement improvements to existing codebases, tools, and development processes.

Qualifications:

Must-Have:

  • Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
  • Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • Technical Skills:
  • Strong proficiency in [programming languages/frameworks/tools].
  • Experience with cloud platforms like AWS, Azure, or GCP.
  • Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
  • Understanding of data structures, algorithms, and system design principles.

Nice-to-Have:

  • Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
  • Knowledge of database technologies (SQL and NoSQL).

Soft Skills:

  • Strong analytical and problem-solving skills.
  • Excellent written and verbal communication skills.
  • Ability to work in a fast-paced environment and manage multiple priorities effectively.
Digital solutions

Digital solutions

Agency job
via HR Central by Melrose Savia Pinto
Bengaluru (Bangalore), Mumbai
2 - 5 yrs
₹12L - ₹18L / yr
Field Sales
Startups
Google Cloud Platform (GCP)
AWS CloudFormation
Sales presentations

Overview: We’re seeking a dynamic and results-oriented Field Sales Manager focused on selling innovative cloud-native technology solutions, including modernization, analytics, AI/ML, and Generative AI, specifically within India's vibrant startup ecosystem. If you’re motivated by fast-paced environments, adept at independently generating opportunities, and excel at closing deals, we'd love to connect with you.


Role Description: In this role, you'll independently identify and engage promising startups, execute strategic go-to-market plans, and build meaningful relationships across the AWS startup ecosystem. You’ll work closely with internal pre-sales and solutions teams to position and propose cloud-native solutions effectively, driving significant customer outcomes.


Key Responsibilities:  


• Identify, prospect, and generate qualified pipeline opportunities independently and through collaboration with the AWS startup ecosystem.

• Conduct comprehensive discovery meetings to qualify potential opportunities, aligning customer needs with our cloud-native solutions.

• Collaborate closely with the pre-sales team to develop compelling proposals, presentations, and solution demonstrations.

• Lead end-to-end sales processes, from prospecting and qualification to negotiation and deal closure.

• Build and nurture strong relationships within the startup community, including founders, CTOs, venture capitalists, accelerators, and AWS representatives.

• Stay informed about emerging trends, competitive offerings, and market dynamics within cloud modernization, analytics, AI/ML, and Generative AI.

• Maintain accurate CRM updates, track sales metrics, and regularly report performance and pipeline status to leadership. 


Qualifications & Experience:  


• BE/BTech/MCA/ME/MTech Only

• 3-6 years of proven experience in technology field sales, ideally in cloud solutions, analytics, AI/ML, or digital transformation.

• Prior experience selling technology solutions directly to startups or growth-stage companies.

• Demonstrated ability to independently manage end-to-end sales cycles with strong results.

• Familiarity and understanding of AWS ecosystem and cloud-native architectures are highly preferred.

• Excellent relationship-building skills, along with exceptional communication, negotiation, and presentation abilities.

• Ability and willingness to travel as needed to customer sites, industry events, and partner meetings.

Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Jaipur, Bhopal, Gurugram
5 - 11 yrs
₹30L - ₹40L / yr
Scala
Microservices
CI/CD
DevOps
Amazon Web Services (AWS)

Dear,


We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.


📌 Job Details:

  • Role: Senior Backend Engineer
  •  Shift: 1 PM – 10 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or up to 30 days


🔹 Job Responsibilities:


✅ Design and develop scalable, reliable, and maintainable backend solutions

✅ Work on event-driven microservices architecture

✅ Implement REST APIs and optimize backend performance

✅ Collaborate with cross-functional teams to drive innovation

✅ Mentor junior and mid-level engineers


🔹 Required Skills:


✔ Backend Development: Scala (preferred), Java, Kotlin

✔ Cloud: AWS or GCP

✔ Databases: MySQL, NoSQL (Cassandra)

✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code

✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch

✔ Agile Methodologies: Scrum, Kanban


⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Wekan Enterprise Solutions
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
NodeJS (Node.js)
MongoDB
NestJS
TypeScript
Microservices

Backend - Software Development Engineer II

 

Experience - 4+ yrs 

 

About Wekan Enterprise Solutions


Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IOT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.

Job Description

We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?

You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers’ technical teams and MongoDB Solutions Architects.

Location - Bangalore

Basic qualifications:


  • Good problem solving skills
  • Deep understanding of software development life cycle
  • Excellent verbal and written communication skills
  • Strong focus on quality of work delivered
  • Relevant experience of 4+ years building high-performance backend applications, with at least 2 or more projects implemented using the required technologies

 

Required Technical Skills:


  • Extensive hands-on experience building high-performance web back-ends using Node.Js, with 3+ years of hands-on experience in Node.JS and Javascript/Typescript
  • Hands-on project experience with Nest.Js
  • Strong experience with Express.Js framework
  • Hands-on experience in data modeling and schema design in MongoDB 
  • Experience integrating with any 3rd party services such as cloud SDKs, payments, push notifications, authentication etc…
  • Exposure into unit testing with frameworks such as Mocha, Chai, Jest or others
  • Strong experience writing and maintaining clear documentation

 

Good to have skills:

  • Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
  • Experience with microservice architecture
  • Experience working with other Relational and NoSQL Databases
  • Experience with technologies such as Kafka and Redis
  • Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies


Appz global Tech Pvt Ltd
Bengaluru (Bangalore), Pune
6 - 9 yrs
₹18L - ₹25L / yr
JPA
Google Cloud Platform (GCP)

Urgent Hiring: Senior Java Developers | Bangalore (Hybrid) 🚀


We are looking for experienced Java professionals to join our team! If you have the right skills and are ready to make an impact, this is your opportunity!


📌 Role: Senior Java Developer

📌 Experience: 6 to 9 Years

📌 Education: BE/BTech/MCA (Full-time)

📌 Location: Bangalore (Hybrid)

📌 Notice Period: Immediate Joiners Only


✅ Mandatory Skills:


🔹 Strong Core Java

🔹 Spring Boot (data flow basics)

🔹 JPA

🔹 Google Cloud Platform (GCP)

🔹 Spring Framework

🔹 Docker, Kubernetes (Good to have)

NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
3 - 8 yrs
₹15L - ₹30L / yr
Product engineering
Ruby on Rails (ROR)
PostgreSQL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Position Name : Product Engineer (Backend Heavy)

Experience : 3 to 5 Years

Location : Bengaluru (Work From Office, 5 Days a Week)

Positions : 2

Notice Period : Immediate joiners or candidates serving notice (within 30 days)


Role Overview :

We’re looking for Product Engineers who are passionate about building scalable backend systems in the FinTech & payments domain. If you enjoy working on complex challenges, contributing to open-source projects, and driving impactful innovations, this role is for you!


What You’ll Do :

  • Develop scalable APIs and backend services.
  • Design and implement core payment systems (see the sketch after this list).
  • Take end-to-end ownership of product development from zero to one.
  • Work on database design, query optimization, and system performance.
  • Experiment with new technologies and automation tools.
  • Collaborate with product managers, designers, and engineers to drive innovation.
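As an illustration of the payment-systems item above, here is a minimal sketch (written in Python purely for illustration, since the stack here is Rails-based) of idempotent payment creation, a staple pattern in payment APIs. The in-memory store and field names are assumptions; a real system would enforce the idempotency key with a database unique constraint.

```python
# Idempotent payment creation: retrying with the same key never double-charges.
import uuid

_payments_by_key: dict[str, dict] = {}  # stand-in for a durable store

def create_payment(idempotency_key: str, amount_paise: int, currency: str = "INR") -> dict:
    """Return the existing payment if this key was already processed."""
    if idempotency_key in _payments_by_key:
        return _payments_by_key[idempotency_key]
    payment = {
        "id": str(uuid.uuid4()),
        "amount": amount_paise,
        "currency": currency,
        "status": "created",
    }
    _payments_by_key[idempotency_key] = payment
    return payment

first = create_payment("order-42-attempt", 49900)
retry = create_payment("order-42-attempt", 49900)
assert first["id"] == retry["id"]  # the retry did not create a second charge
```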

What We’re Looking For :

  • 3+ Years of professional backend development experience.
  • Proficiency in any backend programming language (Ruby on Rails experience is a plus but not mandatory).
  • Experience in building APIs and working with relational databases.
  • Strong communication skills and ability to work in a team.
  • Open-source contributions (minimum 50 stars on GitHub preferred).
  • Experience in building and delivering 0 to 1 products.
  • Passion for FinTech & payment systems.
  • Familiarity with CI/CD, DevOps practices, and infrastructure management.
  • Knowledge of payment protocols and financial regulations (preferred but not mandatory)

Main Technical Skills :

  • Backend : Ruby on Rails, PostgreSQL
  • Infrastructure : GCP, AWS, Terraform (fully automated infrastructure)
  • Security : Zero-trust security protocol managed via Teleport
Read more
TOP MNC

TOP MNC

Agency job
via TCDC by Sheik Noor
Bengaluru (Bangalore), Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata
6 - 10 yrs
₹10L - ₹21L / yr
Java
Google Cloud Platform (GCP)

Java Developer with GCP

Skills : Java and Spring Boot, GCP, Cloud Storage, BigQuery, RESTful API, 

EXP : SA(6-10 Years)

Loc : Bangalore, Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata

Np : Immediate to 60 Days.


Kindly share your updated resume via WA - 91five000260seven

Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Gurugram, Bhopal, Jaipur
5 - 15 yrs
₹20L - ₹35L / yr
Spark
ETL
Data Transformation Tool (DBT)
Python
Apache Airflow

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.


Qualifications & Experience:


Bachelor's or master's degree in Computer Science, Information Systems, or a related field.


5+ years of experience in data engineering, with expertise in data architecture and pipeline development.


Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.


Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, Cassandra.


Strong proficiency in Python and data modelling.


Experience in testing and validation of data pipelines.
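A minimal sketch of what testing and validation between pipeline stages can look like; the required fields and rules here are hypothetical.

```python
# Row-level data-quality checks to run between pipeline stages.
from typing import Iterable, Mapping

REQUIRED_FIELDS = ("order_id", "amount", "order_date")  # assumed schema

def validate_rows(rows: Iterable[Mapping]) -> list[str]:
    """Return human-readable violations; an empty list means the batch passes."""
    errors: list[str] = []
    count = 0
    for i, row in enumerate(rows):
        count += 1
        for field in REQUIRED_FIELDS:
            if row.get(field) in (None, ""):
                errors.append(f"row {i}: missing {field}")
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            errors.append(f"row {i}: negative amount {amount}")
    if count == 0:
        errors.append("no rows produced by upstream stage")
    return errors

sample = [{"order_id": "A1", "amount": 10.5, "order_date": "2024-01-01"}]
assert validate_rows(sample) == []  # usable from pytest or an Airflow task
```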


Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.


If you meet the above criteria and are interested, please share your updated CV along with the following details:


Total Experience:


Current CTC:


Expected CTC:


Current Location:


Preferred Location:


Notice Period / Last Working Day (if serving notice):


⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!

Bottle Lab Technologies Pvt Ltd

at Bottle Lab Technologies Pvt Ltd

1 video
3 recruiters
Agency job
via maple green services by Elvin Johnson
Bengaluru (Bangalore)
6 - 8 yrs
₹25L - ₹35L / yr
Python
RESTful APIs
FastAPI
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

About SmartQ


A leading B2B Food-Tech company built on 4 pillars-great people, great food, great experience, and greater good. Solving complex business problems with our heart and analyzing possible solutions with our mind lie in our DNA. We are on the perpetual route of serving our clients wholeheartedly. Armed with the stability of an MNC and the agility of a start-up, we have spread across 17 countries, having collaborated and executed successfully with 600 clients. We have grown from strength to strength with a blend of exuberant youth and exceptional experience. Bengaluru, being our headquarters, is known as the innovation hub and we have grown up to be the global leader in the institutional food tech space. We were recently acquired by the world's largest foodservice company – Compass group which has an annual turnover of 20 billion USD.  


In this role, you will:


1. Collaborate with Product & Design Teams: Work closely with the Product team to ensure that we are building a scalable, bug-free platform. You will actively participate in product and design discussions, offering valuable insights from a backend perspective to align technology with business goals.

2. Drive Adoption of New Technologies: You will lead brainstorming sessions and define a clear direction for the backend team to incorporate the latest technologies into day-to-day development, continuously optimizing for performance, scalability, and efficiency.

3. RESTful API Design & Development: You will ensure that the APIs you design and develop are well-structured, following best practices, and are suitable for consumption by frontend teams across multiple platforms. A key part of your role is making sure these APIs are scalable and maintainable (see the sketch after this list).

4. Third-Party Integration Support: As we sometimes partner with third-party providers to expedite our market entry, you'll work closely with these partners to integrate their solutions into our system. This involves participating in calls, finding the best integration methods, and providing ongoing support.

5. AI and Prompt Engineering: With AI becoming more integral to backend development, you'll leverage AI to speed up development processes and maintain best practices. Familiarity with prompt engineering and AI-driven problem-solving is a significant plus in our team.
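A minimal sketch of the kind of well-structured, client-friendly REST endpoint item 3 describes, using Flask (one of the frameworks named in the requirements below); the route, pagination scheme, and data are illustrative assumptions.

```python
# A small versioned, paginated read endpoint sketched with Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)

MENU = [{"id": 1, "name": "Veg Thali"}, {"id": 2, "name": "Masala Dosa"}]  # stand-in data

@app.get("/api/v1/menu-items")
def list_menu_items():
    # Simple page/size pagination so every client pages the same way.
    page = int(request.args.get("page", 1))
    size = int(request.args.get("size", 20))
    start = (page - 1) * size
    return jsonify({"page": page, "size": size, "items": MENU[start : start + size]})

if __name__ == "__main__":
    app.run(port=8080, debug=True)
```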



Must-Have Requirements:

  • Strong expertise in Python, microservices, backend development and scalable architectures.
  • Proficiency in designing and building REST APIs.
  • Experience with unit testing in any testing framework and maintaining 100% code coverage.
  • Experience in working with NoSQL DB.
  • Strong understanding of any Cloud platforms such as - GCP/AWS/Azure.
  • Profound knowledge of serverless design patterns.
  • Familiarity with Django or Webapp2 or Flask or similar web app frameworks.
  • Experience in writing unit test using any testing framework.
  • Experience collaborating with product and design teams.
  • Familiarity with integrating third-party solutions.


Good-to-Have Requirements:


  • Educational background includes a degree (B.E/B.Tech/M.Tech) in Computer Science, Engineering, or a related field.
  • 4+ years’ experience as a backend/cloud developer.
  • Good understanding of Google Cloud Platform.
  • Knowledge of AI and how to leverage it for day-to-day tasks in backend development.
  • Familiarity with prompt engineering to enhance productivity.
  • Prior experience working with global or regional teams.
  • Experience with agile methodologies and working within cross-functional teams.
Bengaluru (Bangalore), Gurugram
8 - 20 yrs
₹10L - ₹80L / yr
Blockchain
Web3js
Smart Contracts
DAPP (Decentralized Applications)
Ethereum

Job Title : Chief Technology Officer (CTO) – Blockchain & Web3

Location : Bangalore & Gurgaon

Job Type : Full-Time, On-Site

Working Days : 6 Days


About the Role :

  • We are seeking an experienced and visionary Chief Technology Officer (CTO) to lead our Blockchain & Web3 initiatives.
  • The ideal candidate will have a strong technical background in Blockchain, Distributed Ledger Technology (DLT), Smart Contracts, DeFi, and Web3 applications.
  • As a CTO, you will be responsible for defining and implementing the technology roadmap, leading a high-performing tech team, and driving innovation in the Blockchain and Web3 space.


Key Responsibilities :

  • Define and execute the technical strategy and roadmap for Blockchain & Web3 products and services.
  • Lead the architecture, design, and development of scalable, secure, and efficient blockchain-based applications.
  • Oversee Smart Contract development, Layer-1 & Layer-2 solutions, DeFi, NFTs, and decentralized applications (dApps).
  • Manage and mentor a team of engineers, developers, and blockchain specialists to ensure high-quality product delivery.
  • Drive R&D initiatives to stay ahead of emerging trends and advancements in Blockchain & Web3 technologies.
  • Collaborate with cross-functional teams including Product, Marketing, and Business Development to align technology with business goals.
  • Ensure regulatory compliance, security, and scalability of Blockchain solutions.
  • Build and maintain relationships with industry partners, investors, and technology vendors to drive innovation.


Required Qualifications & Experience :

  • 10+ Years of overall experience in software development with at least 5+ Years in Blockchain & Web3 technologies.
  • Deep understanding of Blockchain protocols (Ethereum, Solana, Polkadot, Hyperledger, etc.), consensus mechanisms, cryptographic principles, and tokenomics.
  • Hands-on experience with Solidity, Rust, Go, Node.js, Python, or other blockchain programming languages.
  • Proven track record of building and scaling decentralized applications (dApps), DeFi platforms, or NFT marketplaces.
  • Experience with cloud infrastructure (AWS, Azure, GCP) and DevOps best practices.
  • Strong leadership and management skills with experience in building and leading high-performing teams.
  • Excellent problem-solving skills with the ability to work in a fast-paced, high-growth environment.
  • Strong understanding of Web3, DAOs, Metaverse, and the evolving regulatory landscape.
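As a small, hedged illustration of the hands-on blockchain familiarity listed above, the sketch below reads basic on-chain state with web3.py (v6-style snake_case API assumed); the RPC URL and address are placeholders.

```python
# Read-only chain queries with web3.py; endpoint and address are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # hypothetical RPC endpoint

if w3.is_connected():
    block = w3.eth.get_block("latest")
    print("latest block:", block.number)
    zero_addr = "0x0000000000000000000000000000000000000000"
    balance_wei = w3.eth.get_balance(zero_addr)
    print("balance (ETH):", w3.from_wei(balance_wei, "ether"))
```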

Preferred Qualifications :

  • Prior experience in a CTO, VP Engineering, or similar leadership role.
  • Experience in fundraising, investor relations, and strategic partnerships.
  • Knowledge of cross-chain interoperability and Layer-2 scaling solutions.
  • Understanding of data privacy, security, and compliance regulations related to Blockchain & Web3.
Bengaluru (Bangalore)
5 - 12 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Company Overview 

Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge. 

  

Position Overview 

We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability. 

  

Key Responsibilities 

  • Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices. 
  • Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency. 
  • Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users. 
  • Implement and manage CI/CD pipelines to automate software delivery and ensure code quality. 
  • Manage and configure cloud-based infrastructure services to optimize performance and cost. 
  • Collaborate with development teams to design and implement scalable, reliable, and secure applications. 
  • Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues. 
  • Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data. 
  • Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability. 
  • Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability. 

  

Qualifications 

  • Bachelor's degree in Computer Science, Engineering, or related field 
  • Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role 
  • Strong understanding of cloud computing principles and experience with AWS 
  • Experience of building and supporting complex CI/CD pipelines using Github 
  • Experience of building and supporting infrastructure as a code using Terraform 
  • Proficiency in scripting and automating tools 
  • Solid understanding of networking concepts and protocols 
  • Understanding of security best practices and experience implementing security controls in cloud environments 
  • Knowledge of modern security requirements like SOC2, HIPAA, and HITRUST will be a solid advantage.
Mumbai, Bengaluru (Bangalore)
5 - 6 yrs
₹15L - ₹20L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

Lightning Job By Cutshort ⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered)


Job Overview:


We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.


Responsibilities:


• Design, implement and maintain CI/CD pipelines using GitHub, Actions/Jenkins, Kubernetes, Helm, and ArgoCD.

• Deploy and manage Kubernetes clusters using AWS.

• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.

• Monitor system performance using Datadog, ELK, and Cloudflare tools.

• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.

• Collaborate with development teams to design, implement and test infrastructure changes.

• Troubleshoot and resolve infrastructure issues as they arise (see the sketch after this list).

• Participate in on-call rotation and provide support for production issues.
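As a sketch of the troubleshooting workflow referenced above, this uses the official Kubernetes Python client to surface pods that are not healthy; access via a local kubeconfig is an assumption.

```python
# List pods that are not Running/Succeeded -- a common first triage step.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```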


Qualifications:

• Bachelor's or Master's degree in Computer Science, Engineering or a related field.

• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub, Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.

• Strong understanding of Linux administration and shell scripting.

• Experience with automation tools such as Terraform, Ansible, or similar.

• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.

• Experience with container orchestration platforms such as Kubernetes.

• Familiarity with container technologies such as Docker.

• Experience with cloud providers such as AWS.

• Experience with monitoring tools such as Datadog and ELK.



Skills:

• Strong analytical and problem-solving skills.

• Excellent communication and collaboration skills.

• Ability to work independently or in a team environment.

• Strong attention to detail.

• Ability to learn and apply new technologies quickly.

• Ability to work in a fast-paced and dynamic environment.

• Strong understanding of DevOps principles and methodologies.


Kindly apply at https://www.wohlig.com/careers

Molecular Connections

at Molecular Connections

4 recruiters
Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹10L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure

About the Role:

We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.


Key Responsibilities:


Cloud Management:

  • Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
  • Ensure high availability, scalability, and security of cloud resources.

Containerization & Orchestration:

  • Develop and manage containerized applications using Docker.
  • Deploy, scale, and manage Kubernetes clusters.

CI/CD Pipelines:

  • Build and maintain robust CI/CD pipelines to automate the software delivery process.
  • Implement monitoring and alerting to ensure pipeline efficiency.

Version Control & Collaboration:

  • Manage code repositories and workflows using Git.
  • Collaborate with development teams to optimize branching strategies and code reviews.

Automation & Scripting:

  • Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
  • Write scripts to optimize and maintain workflows.
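A minimal sketch of the scripting-for-automation work described here, using boto3 to flag EC2 instances that are missing an ownership tag; the region and the tag policy are assumptions.

```python
# Find EC2 instances without an "owner" tag and mark them for follow-up.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is an assumption

untagged = []
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if "owner" not in tags:
            untagged.append(instance["InstanceId"])

if untagged:
    ec2.create_tags(Resources=untagged, Tags=[{"Key": "owner", "Value": "unassigned"}])
    print(f"Tagged {len(untagged)} instance(s) for review")
```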

Monitoring & Logging:

  • Implement and maintain monitoring solutions to ensure system health and performance.
  • Analyze logs and metrics to troubleshoot and resolve issues.


Required Skills & Qualifications:

  • 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
  • Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
  • Hands-on experience building and managing CI/CD pipelines.
  • Proficient in using Git for version control.
  • Experience with scripting languages such as Bash, Python, or PowerShell.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Solid understanding of networking, security, and system administration.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and teamwork skills.


Preferred Qualifications:

  • Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
  • Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
  • Familiarity with serverless architectures and microservices.


Molecular Connections

at Molecular Connections

4 recruiters
Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
React.js
TypeScript
Google Cloud Platform (GCP)
Koa.js
MongoDB

A niche, specialist position in an interdisciplinary team focused on end-to-end solutions. The nature of projects ranges from proof-of-concept innovative applications and parallel implementations per end-user requests to scaling up and continuous monitoring for improvements. The majority of projects will focus on providing automation via both custom solutions and the adaptation of generic machine learning standards to specific use cases/domains.


Clientele includes major publishers from the US and Europe, pharmaceutical bigwigs and government funded projects.


As a Senior Fullstack Developer, you will be responsible for designing, building, and maintaining scalable and performant web applications using modern technologies. You will work with cutting-edge tools and cloud infrastructure (primarily Google Cloud), build front-ends in React JS with Typescript, and implement robust back-end services with Koa.js, MongoDB, and Redis, while ensuring reliable and efficient monitoring with OpenTelemetry and logging with Bunyan. Your expertise in CI/CD pipelines and modern testing frameworks will be key to maintaining a smooth and efficient software development lifecycle.

Key Responsibilities:

  • Fullstack Development: Design, develop, and maintain web applications using JavaScript (Node.js for back-end and React.js with Typescript for front-end).
  • Cloud Infrastructure: Leverage Google Cloud services (like Compute Engine, Cloud Storage, Pub/Sub, etc.) to build scalable and resilient cloud solutions.
  • API Development: Implement RESTful APIs and microservices with Koa.js, ensuring high performance, security, and scalability.
  • Database Management: Manage MongoDB databases for storing and retrieving application data, and use Redis for caching and session management.
  • Logging and Monitoring: Utilize Bunyan for structured logging and OpenTelemetry for distributed tracing and monitoring to ensure system health and performance.
  • CI/CD: Design, implement, and maintain efficient CI/CD pipelines for continuous integration and deployment, ensuring fast and reliable code delivery.
  • Testing & Quality Assurance: Write unit and integration tests using Jest, Mocha, and React Testing Library to ensure code reliability and maintainability.
  • Collaboration: Work closely with front-end and back-end engineers to deliver high-quality software solutions, following agile development practices.
  • Optimization & Scaling: Identify performance bottlenecks, troubleshoot production issues, and scale the system as needed.
  • Code Reviews & Mentorship: Conduct peer code reviews, share best practices, and mentor junior developers to improve team efficiency and code quality.

Must-Have Skills:

  • Google Cloud (GCP): Hands-on experience with various Google Cloud services (Compute Engine, Cloud Storage, Pub/Sub, Firestore, etc.) for building scalable applications.
  • React.js: Strong experience in building modern, responsive user interfaces with React.js and Typescript
  • Koa.js: Strong experience in building web servers and APIs with Koa.js.
  • MongoDB & Redis: Proficiency in working with MongoDB (NoSQL databases) and Redis for caching and session management.
  • Bunyan: Experience using Bunyan for structured logging and tracking application events.
  • OpenTelemetry Ecosystem: Hands-on experience with the OpenTelemetry ecosystem for monitoring and distributed tracing.
  • CI/CD: Proficient in setting up CI/CD pipelines using tools like CircleCI, Jenkins, or GitLab CI.
  • Testing Frameworks: Solid understanding and experience with Jest, Mocha, and React Testing Library for testing both back-end and front-end applications.
  • JavaScript & Node.js: Strong proficiency in JavaScript (ES6+), and experience working with Node.js for back-end services.


Desired Skills & Experience:

  • Experience with other cloud platforms (AWS, Azure).
  • Familiarity with containerization and orchestration tools like Docker and Kubernetes.
  • Experience working with TypeScript.
  • Knowledge of other logging and monitoring tools.
  • Familiarity with agile methodologies and project management tools (JIRA, Trello, etc.).

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 5-10 years of hands-on experience as a Fullstack Developer.
  • Strong problem-solving skills and ability to debug complex systems.
  • Excellent communication skills and ability to work in a team-oriented, collaborative environment.
Wissen Technology

at Wissen Technology

4 recruiters
Tony Tom
Posted by Tony Tom
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.

 

Primary Skills:

  1. AWS or GCP Cloud
  2. DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
  3. Python/Bash/PowerShell scripting

 

Secondary Skills:

  1. Docker or Kubernetes


Surat, Bengaluru (Bangalore)
0 - 1 yrs
₹1.8L - ₹4.8L / yr
AWS
Google Cloud Platform (GCP)
Windows Azure

 Key Responsibilities


AI Model Development 

 - Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.

 - Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.

  

Backend Development with FastAPI

 - Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.

 - Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
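A minimal sketch of a FastAPI service fronting a text-generation model; the model call is stubbed out, and the route and request schema are illustrative assumptions.

```python
# FastAPI endpoint exposing a (stubbed) generation model over REST.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128

def run_model(prompt: str, max_tokens: int) -> str:
    # Placeholder for the real model call (a loaded LLM or a provider SDK).
    return f"echo: {prompt[:max_tokens]}"

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    return {"completion": run_model(req.prompt, req.max_tokens)}

# Run with, e.g.: uvicorn main:app --reload  (module name is an assumption)
```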

  

Pipeline and Integration

 - Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.

 - Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.


Collaboration with Cross-Functional Teams

 - Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.

 - Work with front-end developers to integrate AI-powered functionalities into web applications.


Model Optimization and Fine-Tuning

 - Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.

 - Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.


Documentation and Code Quality

 - Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.

 - Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.


Research and Innovation

 - Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.

 - Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.


Required Skills and Experience


Expertise in Generative AI 

Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).

LangChain & LlamaIndex 

Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.

Python Programming 

Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.

API Development with FastAPI 

Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.

NLP & Machine Learning 

Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.

Database & Storage Systems: Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.

Version Control & CI/CD 

Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.

  

Preferred Skills


Containerization & Cloud Deployment 

Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.

Data Engineering 

Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.

Knowledge of Front-End Technologies: Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.

Someshwara Software
Bengaluru (Bangalore)
2 - 3 yrs
₹4L - ₹7L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.


Responsibilities:

• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.

• Configure and manage EC2 instances to meet application requirements.

• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.

• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.

• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.

• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.

• Implement and monitor S3 storage solutions for secure and scalable data storage

• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.

• Configure Route 53 for domain management, DNS routing, and failover configurations.

• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.

• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.

• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
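As a small example of the S3 responsibilities above, here is a boto3 sketch that uploads an object and issues a time-limited presigned download link; the bucket name, key, and region are assumptions.

```python
# Upload an object and hand out a one-hour presigned download URL.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # region is an assumption
BUCKET = "example-app-assets"                      # bucket name is an assumption

s3.upload_file("report.pdf", BUCKET, "reports/report.pdf")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "reports/report.pdf"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```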



Qualifications:

• Bachelor's degree in computer science, Information Technology, or a related field.

• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.

• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.

• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.

• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.

• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.

• Strong communication skills with the ability to collaborate effectively with cross-functional teams.

• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.


Additional Information:

• We value creativity, innovation, and a proactive approach to problem-solving.

• We offer a collaborative and supportive work environment where your ideas and contributions are valued.

• Opportunities for professional growth and development.

Someshwara Software Pvt Ltd is an equal opportunity employer.


We celebrate diversity and are dedicated to creating an inclusive environment for all employees.

 

NASDAQ listed, Service Provider IT Company

NASDAQ listed, Service Provider IT Company

Agency job
via CaptiveAide Advisory Pvt Ltd by Abhishek Dhuria
Bengaluru (Bangalore), Hyderabad
12 - 15 yrs
₹45L - ₹48L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Cloud Computing
Microservices

Job Summary:

As a Cloud Architect at our organization, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.


Key Responsibilities:

  • Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
  • Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
  • Develop and maintain cloud-native solutions leveraging services from various cloud providers.
  • Architect and deploy microservices using REST and GraphQL to support our application development needs.
  • Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
  • Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
  • Implement robust security practices and policies to protect cloud environments and data.
  • Design and implement data management strategies, including data governance, data integration, and data security.
  • Stay up-to-date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
  • Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
  • Optimize cost and performance across different cloud environments.


Qualifications/ Experience & Skills Required:

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Experience: 10 - 15 Years
  • Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
  • Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
  • Strong knowledge of cloud-native solutions and microservices architecture.
  • Proficiency in using GraphQL for designing and implementing APIs.
  • Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
  • Experience with DevOps practices and tools, including CI/CD pipelines.
  • Excellent problem-solving skills and ability to troubleshoot complex issues.
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment.
  • Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
  • Experience with data management, including data governance, data integration, and data security.


Preferred Skills:

  • Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
  • Experience with containerization technologies (Docker, Kubernetes).
  • Familiarity with cloud cost management and optimization tools.
Cargill Business Services
Vignesh R
Posted by Vignesh R
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Job Purpose and Impact

The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.

Key Accountabilities

  • Collaborate with internal and external partners to understand and evaluate business requirements.
  • Implement modern engineering practices to ensure product quality.
  • Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
  • Write well-designed, testable and efficient code using full-stack engineering capability.
  • Integrate software components into a fully functional software system.
  • Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
  • Proficiency in at least one configuration management or orchestration tool, such as Ansible.
  • Experience with cloud monitoring and logging services.

Qualifications

Minimum Qualifications

  • Bachelor's degree in a related field or equivalent exp
  • Knowledge of public cloud services & application programming interfaces
  • Working exp with continuous integration and delivery practices

Preferred Qualifications

  • 3-5 years of relevant exp whether in IT, IS, or software development
  • Exp in:
  •  Code repositories such as Git
  • Scripting languages (Python & PowerShell)
  • Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
  • Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
  • Databases such as Postgres, SQL, Elastic
Bengaluru (Bangalore)
15 - 25 yrs
₹3L - ₹20L / yr
Channel Sales
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
SaaS

Job Description - Manager Sales

Min 15 years experience,

Should have experience in sales of the Cloud IT SaaS products portfolio which Savex deals with,

Team Management experience, leading cloud business including teams

Sales manager - Cloud Solutions

Reporting to Sr Management

Good personality

Distribution background

Keen on Channel partners

Good database of OEMs and channel partners.

Age group - 35 to 45yrs

Male Candidate

Good communication

B2B Channel Sales

Location - Bangalore


If interested, reply with your CV and the details below:


Total experience -

Current CTC -

Expected CTC -

Notice period -

Current location -

Qualification -

Total channel sales experience -

Which cloud IT products have you sold?

What annual revenue have you generated through sales?

Wissen Technology

4 recruiters
Posted by Seema Srivastava
Mumbai, Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
Java
J2EE
Spring Boot
Hibernate (Java)
Microservices

Experience: 5+ Years


• Experience in Core Java, Spring Boot

• Experience in microservices and Angular

• Extensive experience developing enterprise-scale systems for global organizations. Should possess good architectural knowledge and be aware of enterprise application design patterns.

• Should be able to analyze, design, develop and test complex, low-latency client-facing applications.

• Good development experience with RDBMSs such as SQL Server, Postgres, Oracle, or DB2

• Good knowledge of multi-threading

• Basic working knowledge of Unix/Linux

• Excellent problem solving and coding skills in Java

• Strong interpersonal, communication, and analytical skills.

• Should be able to clearly express design ideas and thoughts

Wissen Technology

4 recruiters
Posted by Tony Tom
Bengaluru (Bangalore)
2 - 6 yrs
Best in industry
Terraform
Python
Linux/Unix
Infrastructure
Docker

GCP Cloud Engineer:

  • Proficiency in infrastructure as code (Terraform).
  • Scripting and automation skills (e.g., Python, Shell); Python is a must (see the sketch after this list).
  • Collaborate with teams across the company (e.g., network, security, operations) to build complete cloud offerings.
  • Design disaster recovery and backup strategies to meet application objectives.
  • Working knowledge of Google Cloud.
  • Working knowledge of various tools, open-source technologies, and cloud services.
  • Experience working on Linux-based infrastructure.
  • Excellent problem-solving and troubleshooting skills.
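
A minimal sketch of the scripting side of this role, assuming Application Default Credentials and the google-cloud-storage client library: it lists a project's Cloud Storage buckets and reports whether object versioning is enabled, one simple signal when reviewing backup and disaster-recovery posture. The project ID and the audit scenario are placeholders, not part of the posting.

```python
from google.cloud import storage  # pip install google-cloud-storage

def audit_bucket_versioning(project_id: str) -> None:
    """Print each bucket's location and whether object versioning is enabled.

    Versioning is one simple signal when reviewing backup/DR posture.
    """
    client = storage.Client(project=project_id)
    for bucket in client.list_buckets():
        status = "enabled" if bucket.versioning_enabled else "DISABLED"
        print(f"{bucket.name} ({bucket.location}): versioning {status}")

if __name__ == "__main__":
    # Assumes Application Default Credentials; "my-project" is a placeholder.
    audit_bucket_versioning("my-project")
```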


Wissen Technology

4 recruiters
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Mumbai, Pune
7 - 20 yrs
Best in industry
.NET
ASP.NET
C#
Google Cloud Platform (GCP)
Migration

Job Title: .NET Developer with Cloud Migration Experience

Job Description:

We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly migrating on-premises applications to cloud platforms such as Azure or AWS.

Responsibilities:

  • Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
  • Collaborate with cross-functional teams to define, design, and ship new features
  • Participate in code reviews and ensure coding best practices are followed
  • Work closely with the infrastructure team to migrate on-premises applications to the cloud
  • Troubleshoot and debug issues that arise during migration and post-migration phases
  • Stay updated with the latest trends and technologies in .NET development and cloud computing

Requirements:

  • Bachelor's degree in Computer Science or related field
  • X+ years of experience in .NET development using C#, MVC, and ASP.NET
  • Hands-on experience with cloud migration projects, preferably with Azure or AWS
  • Strong understanding of cloud computing concepts and principles
  • Experience with database technologies such as SQL Server
  • Excellent problem-solving and communication skills

Preferred Qualifications:

  • Microsoft Azure or AWS certification
  • Experience with other cloud platforms such as Google Cloud Platform (GCP)
  • Familiarity with DevOps practices and tools


Publicis Sapient

10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

Job Summary:

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and data wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.


Role & Responsibilities:

Your role is focused on the design, development, and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode (see the sketch after this list)

• Build functionality for data analytics, search and aggregation
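
As a purely illustrative sketch of the batch and real-time ingestion work described above (not Publicis Sapient's actual codebase): a minimal PySpark job that reads a batch dump from object storage and subscribes to a Kafka topic as a streaming source. The paths, broker address, and topic name are placeholders, and the Kafka source requires the spark-sql-kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Batch ingestion: read a periodic dump from object storage.
batch_df = spark.read.parquet("s3a://example-bucket/raw/orders/")  # placeholder path
batch_df.write.mode("append").parquet("s3a://example-bucket/bronze/orders/")

# Real-time ingestion: subscribe to a Kafka topic as a streaming source
# (requires the spark-sql-kafka connector on the classpath).
stream_df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

# Land the stream next to the batch data; checkpointing makes it restartable.
query = (
    stream_df.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/bronze/orders_stream/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .start()
)
# query.awaitTermination()  # block until the stream is stopped
```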

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to at least one cloud platform's data services (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Working knowledge of data platform services on at least one cloud platform, including IAM and data security.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines (see the sketch after this list).

5. CI/CD – infrastructure provisioning on cloud, automated build and deployment pipelines, and code quality.

6. Cloud data specialty and other related Big Data technology certifications.
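
To illustrate the pipeline-tuning item above (a generic sketch, not a Publicis Sapient standard): three common Spark levers are broadcasting the small side of a join to avoid shuffling the large side, repartitioning on the processing key to balance tasks, and caching only results that are reused. Table paths and the join key are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

facts = spark.read.parquet("s3a://example-bucket/bronze/facts/")  # large table
dims = spark.read.parquet("s3a://example-bucket/bronze/dims/")    # small lookup

# Broadcast the small side of the join so the large side is never shuffled.
joined = facts.join(broadcast(dims), "dim_id")

# Repartition on the processing key so downstream stages are balanced;
# cache only because the result is reused by more than one action below.
shaped = joined.repartition(200, "dim_id").cache()

print(shaped.count())                     # materializes the cache
shaped.write.mode("overwrite").parquet(   # second action reuses the cache
    "s3a://example-bucket/gold/facts_enriched/"
)
```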


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient

10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

Job Summary:

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and data wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.


Role & Responsibilities:


Your role is focused on the design, development, and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search, and aggregation (see the sketch after this list)
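
A hedged sketch of the analytics and aggregation bullet above, not a prescribed design: a PySpark job that rolls raw events up into daily per-customer aggregates that a downstream search or BI layer could index. The input path, column names, and output layout are assumptions for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("aggregation-sketch").getOrCreate()

# Placeholder input: one row per event, with a customer id, timestamp, and amount.
events = spark.read.parquet("s3a://example-bucket/bronze/events/")

# Roll events up into daily per-customer aggregates that a downstream
# search or BI layer could index.
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partitioning by day keeps reads cheap for date-bounded queries.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://example-bucket/gold/daily_customer_totals/"
)
```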


Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in Big Data technologies.

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience.

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks such as ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD – infrastructure provisioning on cloud, automated build and deployment pipelines, and code quality.

6. Working knowledge of data platform services on at least one cloud platform, including IAM and data security.

7. Cloud data specialty and other related Big Data technology certifications.



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes
