Google Cloud Platform (GCP) Jobs in Pune

Apply to 50+ Google Cloud Platform (GCP) Jobs in Pune on CutShort.io. Explore the latest Google Cloud Platform (GCP) job opportunities across top companies like Google, Amazon & Adobe.
Fractal Analytics

5 recruiters
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Hyderabad, Gurugram, Noida, Pune, Mumbai, Chennai, Coimbatore
4yrs+
Best in industry
Generative AI
Machine Learning (ML)
LLMOps
Large Language Models (LLM) tuning
Open-source LLMs
+15 more

Role description:

You will build curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid, hands-on development and engineering skills across GenAI application development, including data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background plus engineering skills are highly preferred for this LLMOps role.
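For illustration only, a minimal sketch of the simple RAG loop this role covers, in plain Python. The `embed` and `llm_complete` helpers are hypothetical stand-ins for a real embedding model and a proprietary or open-source LLM:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: a real pipeline would call a sentence-transformer
    # or a hosted embeddings API here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a call to a hosted or open-source LLM.
    return f"[answer grounded in a prompt of {len(prompt)} chars]"

# Ingest: embed documents into a tiny in-memory vector store.
docs = [
    "Guardrails constrain LLM outputs to safe, on-topic responses.",
    "RAG grounds LLM answers in retrieved enterprise documents.",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Retrieve: rank documents by cosine similarity to the query embedding.
    q = embed(query)
    ranked = sorted(
        index,
        key=lambda pair: -float(
            np.dot(q, pair[1]) / (np.linalg.norm(q) * np.linalg.norm(pair[1]))
        ),
    )
    return [doc for doc, _ in ranked[:k]]

# Augment the prompt with retrieved context, then generate.
query = "How does RAG reduce hallucination?"
context = "\n".join(retrieve(query))
print(llm_complete(f"Answer using only this context:\n{context}\n\nQuestion: {query}"))
```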


Required skills:

  • 4-8 years of experience working on ML projects, including business requirement gathering, model development, training, deployment at scale, and monitoring model performance for production use cases
  • Strong knowledge of Python, NLP, Data Engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
  • Should have worked on both proprietary and open-source large language models
  • Experience with LLM fine-tuning and creating distilled models from hosted LLMs
  • Experience building data pipelines for model training
  • Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
  • Experience deploying GenAI applications on cloud and on-premise at scale for production
  • Experience creating CI/CD pipelines
  • Working knowledge of Kubernetes
  • Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
  • Experience creating workable prototypes using agentic AI frameworks like CrewAI, TaskWeaver, AutoGen
  • Experience in lightweight UI development using Streamlit or Chainlit (optional)
  • Desired: experience with open-source tools for ML development, deployment, observability, and integration
  • Background in DevOps and MLOps is a plus
  • Experience with collaborative code versioning tools like GitHub/GitLab
  • Team player with good communication and presentation skills
Data Axle

2 candid answers
Posted by Eman Khan
Pune
7 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
ETL
Python
Java
Scala
+4 more

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.


Roles and Responsibilities:

  • Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
  • Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage.
  • Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
  • Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka (see the sketch after this list).
  • Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
  • Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
  • Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
  • Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
  • Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
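As a hedged illustration of the streaming item above (Pub/Sub plus Dataflow), here is a minimal Apache Beam pipeline in Python that reads JSON events from a Pub/Sub subscription and appends them to a BigQuery table; the project, subscription, table, and schema are placeholder assumptions:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder resource names (hypothetical).
SUBSCRIPTION = "projects/my-project/subscriptions/events-sub"
TABLE = "my-project:analytics.events"

def run() -> None:
    # streaming=True is required for an unbounded Pub/Sub source;
    # Dataflow runner settings would also be passed via these options.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                TABLE,
                schema="event_id:STRING,ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```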


Basic Qualifications

  • 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
  • Strong proficiency in SQL and experience with BigQuery for data warehousing.
  • Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
  • Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
  • Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
  • Knowledge of data governance, security best practices, and compliance requirements in cloud environments.


Preferred Qualifications:

  • Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
  • Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
  • Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
  • Proficiency in Python, Java, or Scala for data engineering and pipeline development.
  • Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
  • Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
  • Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.
Bits In Glass

3 candid answers
Posted by Nikita Sinha
Remote, Hyderabad, Pune, Mohali
5 - 8 yrs
Upto ₹30L / yr (Varies)
Java
Google Cloud Platform (GCP)
React.js

We are seeking a highly skilled Java full-stack developer with 5–8 years of experience to join our dynamic development team. The ideal candidate will have deep technical expertise across Java, Microservices, React/Redux, Kubernetes, DevOps tools, and GCP. You will work on designing and deploying full-stack applications that are robust, scalable, and aligned with business goals.

Key Responsibilities

  • Design, develop, and deploy scalable full-stack applications using Java, React, and Redux
  • Build microservices following SOLID principles
  • Collaborate with cross-functional teams, including product owners, QA, BAs, and other engineers
  • Write clean, maintainable, and efficient code
  • Perform debugging, troubleshooting, and optimization
  • Participate in code reviews and contribute to engineering best practices
  • Stay updated on security, privacy, and compliance requirements
  • Work in an Agile/Scrum environment using tools like JIRA and Confluence

Technical Skills Required

Frontend

  • Strong proficiency in JavaScript and modern ES6 features
  • Expertise in React.js with advanced knowledge of hooks (useCallback, useMemo, etc.)
  • Solid understanding of Redux for state management

Backend

  • Strong hands-on experience in Java
  • Building and maintaining Microservices architectures

DevOps & Infrastructure

  • Experience with CI/CD tools: Jenkins, Nexus, Maven, Ansible
  • Terraform for infrastructure as code
  • Containerization and orchestration using Docker and Kubernetes/GKE
  • Experience with IAM, security roles, service accounts

Cloud

  • Proficient with services on any major cloud platform

Database

  • Hands-on experience with PostgreSQL, MySQL, BigQuery

Scripting

  • Proficiency in Bash/Shell scripting and Python

Non-Technical Skills

  • Strong communication and interpersonal skills
  • Ability to work effectively in distributed teams across time zones
  • Quick learner and adaptable to new technologies
  • Team player with a collaborative mindset
  • Ability to explain complex technical concepts to non-technical stakeholders

Nice to Have

  • Experience with NetReveal / Detica

Why Join Us?

  • 🚀 Challenging Projects: Be part of innovative solutions making a global impact
  • 🌍 Global Exposure: Work with international teams and clients
  • 📈 Career Growth: Clear pathways for professional advancement
  • 🧘‍♂️ Flexible Work Options: Hybrid and remote flexibility to support work-life balance
  • 💼 Competitive Compensation: Industry-leading salary and benefits
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Kolkata
8 - 15 yrs
₹25L - ₹45L / yr
Java
Spring Boot
Microservices
Leadership
Team leadership
+11 more

Job Title: Lead Java Developer (Backend)

Experience Required: 8 to 15 Years

Open Positions: 5

Location: Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)

Work Mode: Open to Remote / Hybrid / Onsite

Notice Period: Immediate Joiner / 30 Days or Less


About the Role:

  • We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
  • This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
  • This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.


Key Responsibilities:

  • Design, develop, and implement scalable backend systems using Java and Spring Boot.
  • Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
  • Advocate and implement engineering best practices: SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
  • Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
  • Guide and mentor team members, fostering technical excellence and ownership.
  • Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.

What We’re Looking For:

  • Proven experience in Java backend development (Spring Boot, Microservices).
  • 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Good understanding of containerization and orchestration tools like Docker and Kubernetes.
  • Exposure to DevOps and Infrastructure as Code practices.
  • Strong problem-solving skills and the ability to design solutions from first principles.
  • Prior experience in product-based or startup environments is a big plus.

Ideal Candidate Profile:

  • A tech enthusiast with a passion for clean code and scalable architecture.
  • Someone who thrives in collaborative, transparent, and feedback-driven environments.
  • A leader who takes ownership beyond individual deliverables to drive overall team and project success.

Interview Process

  1. Initial Technical Screening (via platform partner)
  2. Technical Interview with Engineering Team
  3. Client-facing Final Round

Additional Info:

  • Targeting profiles from product/startup backgrounds.
  • Strong preference for candidates with under 1 month of notice period.
  • Interviews will be fast-tracked for qualified profiles.
Xebia IT Architects

2 recruiters
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM – 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (see the sketch after this list).
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
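A minimal, hedged Airflow sketch of the Raw → Silver → Gold orchestration mentioned above; the task bodies are placeholders where a real pipeline would trigger Databricks jobs or Spark code:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_raw():
    print("raw layer loaded")       # placeholder: land source data as-is

def refine_silver():
    print("silver layer built")     # placeholder: clean and conform raw data

def publish_gold():
    print("gold layer published")   # placeholder: build curated aggregates

with DAG(
    dag_id="medallion_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    raw = PythonOperator(task_id="ingest_raw", python_callable=ingest_raw)
    silver = PythonOperator(task_id="refine_silver", python_callable=refine_silver)
    gold = PythonOperator(task_id="publish_gold", python_callable=publish_gold)

    raw >> silver >> gold  # enforce Raw → Silver → Gold ordering
```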

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Gruve
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Upto ₹50L / yr (Varies)
DevOps
CI/CD
Git
Kubernetes
Ansible
+7 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in using their data to make more intelligent business decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 
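By way of illustration only, a small sketch of the kind of availability check this role automates, assuming the official `kubernetes` Python client and an existing kubeconfig; it flags deployments whose ready replicas fall short of the desired count:

```python
from kubernetes import client, config

def find_degraded_deployments() -> list[str]:
    """Return namespace/name of deployments with fewer ready replicas than desired."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    degraded = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            degraded.append(
                f"{dep.metadata.namespace}/{dep.metadata.name} ({ready}/{desired} ready)"
            )
    return degraded

if __name__ == "__main__":
    for line in find_degraded_deployments():
        print("DEGRADED:", line)
```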

Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering, or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  • Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform and Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications:

  • Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
Xebia IT Architects

2 recruiters
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Jaipur, Bhopal, Gurugram
5 - 11 yrs
₹30L - ₹40L / yr
Scala
Microservices
CI/CD
DevOps
Amazon Web Services (AWS)
+2 more

Dear,


We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.


📌 Job Details:

  • Role: Senior Backend Engineer
  •  Shift: 1 PM – 10 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or up to 30 days


🔹 Job Responsibilities:


✅ Design and develop scalable, reliable, and maintainable backend solutions

✅ Work on event-driven microservices architecture

✅ Implement REST APIs and optimize backend performance

✅ Collaborate with cross-functional teams to drive innovation

✅ Mentor junior and mid-level engineers


🔹 Required Skills:


✔ Backend Development: Scala (preferred), Java, Kotlin

✔ Cloud: AWS or GCP

✔ Databases: MySQL, NoSQL (Cassandra)

✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code

✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch

✔ Agile Methodologies: Scrum, Kanban


⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

DeepIntent

2 candid answers
17 recruiters
Posted by Indrajeet Deshmukh
Pune
4 - 8 yrs
Best in industry
SQL
Java
Spring Boot
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

What You’ll Do:


* Establish formal data practice for the organisation.

* Build & operate scalable and robust data architectures.

* Create pipelines for the self-service introduction and usage of new data.

* Implement DataOps practices

* Design, develop, and operate data pipelines that support data scientists and machine learning engineers.

* Build simple, highly reliable data storage, ingestion, and transformation solutions that are easy to deploy and manage.

* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.

 

Who You Are:


* Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data.

* Experience working with public clouds like GCP/AWS.

* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.

* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.

* Proficient with SQL, Java, Spring Boot, Python or a JVM-based language, and Bash.

* Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.

* Good communication skills with the ability to collaborate with both technical and non-technical people.

* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious

Appz global Tech Pvt Ltd
Bengaluru (Bangalore), Pune
6 - 9 yrs
₹18L - ₹25L / yr
JPA
Google Cloud Platform (GCP)

Urgent Hiring: Senior Java Developers | Bangalore (Hybrid) 🚀


We are looking for experienced Java professionals to join our team! If you have the right skills and are ready to make an impact, this is your opportunity!


📌 Role: Senior Java Developer

📌 Experience: 6 to 9 Years

📌 Education: BE/BTech/MCA (Full-time)

📌 Location: Bangalore (Hybrid)

📌 Notice Period: Immediate Joiners Only


✅ Mandatory Skills:


🔹 Strong Core Java

🔹 Spring Boot (data flow basics)

🔹 JPA

🔹 Google Cloud Platform (GCP)

🔹 Spring Framework

🔹 Docker, Kubernetes (Good to have)

TOP MNC

Agency job via TCDC by Sheik Noor
Bengaluru (Bangalore), Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata
6 - 10 yrs
₹10L - ₹21L / yr
Java
Google Cloud Platform (GCP)

Java Developer with GCP

Skills: Java and Spring Boot, GCP, Cloud Storage, BigQuery, RESTful APIs

Experience: SA (6-10 years)

Location: Bangalore, Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata

Notice Period: Immediate to 60 days


Kindly share your updated resume via WA - 91five000260seven

Xebia IT Architects

2 recruiters
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Gurugram, Bhopal, Jaipur
5 - 15 yrs
₹20L - ₹35L / yr
Spark
ETL
Data Transformation Tool (DBT)
Python
Apache Airflow
+2 more

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.


Qualifications & Experience:


  • Bachelor's or master's degree in Computer Science, Information Systems, or a related field.
  • 5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
  • Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.
  • Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
  • Strong proficiency in Python and data modelling.
  • Experience in testing and validation of data pipelines.
  • Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.


If you meet the above criteria and are interested, please share your updated CV along with the following details:


  • Total Experience:
  • Current CTC:
  • Expected CTC:
  • Current Location:
  • Preferred Location:
  • Notice Period / Last Working Day (if serving notice):


⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!

Verto

3 recruiters
Posted by Joshua Daniel
Pune
6 - 15 yrs
₹15L - ₹40L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+4 more

At Verto, we’re passionate about helping businesses in Africa reach the world. What first started life as an FX solution for trading Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of Africa.

We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Every day, millions of companies have to juggle long settlement periods, high transaction fees, and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.

We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.

Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.

We are looking for a strong Full Stack Developer to join our team. This person will be involved in active development assignments. You are expected to have between 2 and 6 years of professional experience in object-oriented languages, to have done considerable work in Node.js along with a modern web application library such as Angular, and to have at least a working knowledge of developing scalable, distributed cloud applications on AWS or another cloud.

We’re looking for someone who is not only a good full-stack developer but also aware of modern trends in distributed software application development. You’re smart enough to work at top companies, but you’re picky about finding the right role (this is more than just a job, right?). You’re experienced, but you also like to learn new things. And you want to work with smart people and have fun building something great.


In this role you will:

  • Design RESTful APIs
  • Work with other team members to develop and test highly scalable web applications and services as part of a suite of products in the Data governance domain working with petabyte-scale data
  • Design and create services and system architecture for your projects, and contribute and provide feedback to other team members
  • Use AWS to set up geographically agnostic systems in the cloud.
  • Exercise your strong skills & working knowledge of MySQL and relational databases
  • Prototype and develop new ideas and participate in all parts of the lifecycle from research to release
  • Work within a small team owning deliverables for our web APIs and front end.
  • Use development tools such as AWS Codebuild, git, npm, Visual Studio Code, Serverless framework, Swagger Specs, Angular, Flutter, AWS Lambda, MongoDB, MySQL, Redis, SQS, Kafka etc.
  • Design and develop dockerized applications that will be deployed flexibly either on the cloud or on-premises depending on business requirements

You’ll have:

  • 5+ years of professional development experience using any object-oriented language
  • Have developed and delivered at least one application using Node.js
  • Experience with modern web application building libraries such as Angular, Polymer, React
  • Solid OOP and software design knowledge – you should know how to create software that’s extensible, reusable and meets desired architectural objectives
  • Excellent understanding of HTTP and REST standards
  • Experience with relational databases such as MySQL
  • Good experience writing unit and acceptance tests
  • Proven experience in developing highly scalable distributed cloud applications on a cloud system, preferably AWS
  • You’re a great communicator and are capable of not just doing the work, but teaching others and explaining the “why” behind complicated technical decisions.
  • You aren’t afraid to roll up your sleeves: This role will evolve, and we’ll want you to evolve with it!
Wissen Technology

4 recruiters
Posted by Tony Tom
Pune
5 - 10 yrs
Best in industry
MariaDB
Kubernetes
MySQL
TiDB
Amazon Web Services (AWS)
+2 more
  • TiDB (good to have)
  • Kubernetes (must have)
  • MySQL (must have)
  • MariaDB (must have)
  • Looking for a candidate who has more exposure to reliability than maintenance
FlytBase

3 recruiters
Posted by Shilpa Kumari
Pune
0 - 4 yrs
₹6L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Position: SDE-1 DevSecOps

Location: Pune, India

Experience Required: 0+ Years


We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.


About FlytBase


FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe, with the largest network of partners in 50+ countries.


The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, KIIT and with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.

 

The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally - FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.


Role and Responsibilities:


  • Participate in the creation and maintenance of CI/CD solutions and pipelines.
  • Leverage Linux and shell scripting to automate security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management (see the sketch after this list).
  • Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
  • Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
  • Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
  • Contribute to security assessments, including vulnerability and penetration testing against frameworks such as NIST, CIS AWS Benchmarks, and NIS2.
  • Implement and oversee compliance processes for SOC II, ISO27001, and GDPR.
  • Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
  • Automate routine tasks and create tools to improve team efficiency and system robustness.
  • Contribute to disaster recovery plans and ensure robust backup systems are in place.
  • Develop and enforce security policies and respond effectively to security incidents.
  • Manage incident response protocols, including on-call rotations and strategic planning.
  • Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
  • Implementing Service Level Indicators (SLIs) and maintaining Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
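As a small, hedged illustration of the secure-AWS-architecture bullet above, a Python/boto3 sketch that audits S3 buckets for a missing or incomplete Public Access Block; credentials and permissions are assumed to be configured:

```python
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block() -> list[str]:
    """Return names of S3 buckets whose Public Access Block is absent or partial."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            if not all(cfg.values()):  # any of the four block flags disabled
                flagged.append(name)
        except ClientError as err:
            # No configuration at all also counts as a finding.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print("Public access block missing or incomplete:", name)
```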


Best suited for candidates who: (Skills/Experience)


  • Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
  • Background in IT or computer science.
  • Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
  • Solid understanding of network layers and TCP/IP protocols.
  • In-depth understanding of operating systems, networking, and cloud services.
  • Strong problem-solving skills with a 'hacker' mindset.
  • Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus. 
  • Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.


Compensation: 


This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.


Perks:


  • Fast-paced Startup culture
  • Hacker mode environment
  • Enthusiastic and approachable team
  • Professional autonomy
  • Company-wide sense of purpose
  • Flexible work hours
  • Informal dress code


Avegen Health
Pune
3 - 5 yrs
₹9L - ₹15L / yr
React Native
React.js
JavaScript
TypeScript
RESTful APIs
+3 more

Avegen is a digital healthcare company empowering individuals to take control of their health and supporting healthcare professionals in delivering life-changing care. Avegen’s core product, HealthMachine®, is a cloud-hosted, next-generation digital healthcare engine for pioneers in digital healthcare, including healthcare providers and pharmaceutical companies, to deploy high-quality robust digital care solutions efficiently and effectively. We are ISO27001, ISO13485, and Cyber Essentials certified; and compliant with the NHS Data Protection Toolkit and GDPR.


Job Summary:


Senior Software Engineer will be responsible for developing, designing, and maintaining the core framework of mobile applications for our platform. This includes tasks such as creating and implementing new features, troubleshooting and debugging any issues, optimizing the performance of the app, collaborating with cross-functional teams, and staying current with the latest advancements in React Native and mobile app development. We are looking for exceptional candidates who have an in-depth understanding of React, JavaScript, and TypeScript, can create pixel-perfect UI, and are obsessed with creating the best experiences for end users.


Your responsibilities include:


  1. Architect and build performant mobile applications on both iOS and Android platforms using React Native.
  2. Work with managers to provide technical consultation and assist in defining the scope and sizing of work.
  3. Maintain compliance with standards such as ISO 27001, ISO 13485, and Cyber Essentials that Avegen adheres to.
  4. Lead configuration of our platform HealthMachine™ in line with functional specifications and development of platform modules with a focus on quality and performance.
  5. Write well-documented, clean Javascript/TypeScript code to build reusable components in the platform.
  6. Maintain code, write automated tests, and assist DevOps in CI/CD to ensure the product is of the highest quality.
  7. Lead by example in best practices for software design and quality. You will stay current with tools and technologies to seek out the best needed for the job.
  8. Train team members on software design principles and emerging technologies by taking regular engineering workshops.


Requirements:


  1. Hands-on experience working in a product company developing consumer-facing mobile apps that are deployed and currently in use in production. He/she must have at least 3 mobile apps live in the Apple App Store/Google Play Store.
  2. Proven ability to mentor junior engineers to realize a delivery goal.
  3. Solid attention to detail, problem-solving, and analytical skills & excellent troubleshooting skills.
  4. In-depth understanding of React and its ecosystem with the latest features.
  5. Experience in writing modular, reusable custom JavaScript/TypeScript modules that scale well for high-volume applications.
  6. Strong familiarity with native development tools such as Xcode and Android Studio.
  7. A positive, “can do” attitude, unafraid to lead complex React Native implementations.
  8. Experience in building mobile apps with intensive server communication (REST APIs, GraphQL, WebSockets, etc.).
  9. Self-starter, able to work in a fast-paced, deadline-driven environment with multiple priorities.
  10. Excellent command of version control systems like Git.
  11. Working in Agile/SCRUM methodology, understanding of the application life cycle, and experience working on project management tools like Atlassian JIRA.
  12. Good command of the Unix operating system and understanding of cloud computing platforms like AWS, GCP, Azure, etc.
  13. Hands-on experience in database technologies including RDBMS and NoSQL and a firm grasp of data models and ER diagrams.
  14. Open source contributions and experience developing your own React Native wrappers for native functionality is a plus.



Qualification:

BE/BTech/MS in Information Technology, Computer Science, or a related discipline.

TVARIT GmbH

2 candid answers
Posted by Shivani Kawade
Remote, Pune
2 - 6 yrs
₹8L - ₹25L / yr
SQL Azure
Databricks
Python
SQL
ETL
+9 more

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. Renowned reference customers, competent technology, a strong research team from renowned universities, and a renowned AI award (e.g., under EU Horizon 2020) make TVARIT one of the most innovative AI companies in Germany and Europe.


We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.


We are seeking a skilled and motivated Senior Data Engineer from the manufacturing industry, with over four years of experience, to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building and enhancing a data lakehouse and the subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.


Skills Required:


  • Experience in the manufacturing industry (metal industry is a plus)
  • 4+ years of experience as a Data Engineer
  • Experience in data cleaning & structuring and data manipulation
  • Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
  • ETL pipelines: Proven experience in designing, building, and maintaining ETL pipelines (see the sketch after this list).
  • Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
  • Experience in SQL and data structures
  • Knowledge of big data technologies such as Spark, Flink, Hadoop and other Apache projects, and NoSQL databases.
  • Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
  • Proficient in data management and data governance
  • Strong analytical skills and experience extracting actionable insights from raw data to help improve the business.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork abilities.
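A deliberately small sketch of the ETL pattern referenced in the ETL pipelines item above, using pandas; the file names, columns, and cleaning rules are illustrative assumptions, not TVARIT's actual pipeline:

```python
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw sensor readings (illustrative CSV source).
    return pd.read_csv(path, parse_dates=["timestamp"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete rows, then derive per-machine hourly averages.
    df = df.dropna(subset=["machine_id", "temperature"])
    return (
        df.set_index("timestamp")
          .groupby("machine_id")["temperature"]
          .resample("1h")
          .mean()
          .reset_index()
    )

def load(df: pd.DataFrame, path: str) -> None:
    # Load: write the curated layer as Parquet for downstream analytics.
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(extract("raw_readings.csv")), "hourly_temperature.parquet")
```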


Nice To Have:

  • Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
  • Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).

Benefits and Perks:

  • A culture that fosters innovation, creativity, continuous learning, and resilience
  • Progressive leave policy promoting work-life balance
  • Mentorship opportunities with highly qualified internal resources and industry-driven programs
  • Multicultural peer groups and supportive workplace policies
  • Annual workcation program allowing you to work from various scenic locations
  • Experience the unique environment of a dynamic start-up


Why should you join TVARIT?


Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.


If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

MangoApps

29 recruiters
Posted by Dhhruval Modi
Pune
5 - 9 yrs
₹18L - ₹35L / yr
Microsoft Windows Azure
Google Cloud Platform (GCP)
Terraform
CloudWatch
Linux/Unix

About the job

MangoApps builds enterprise products that make employees at organizations across the globe more effective and productive in their day-to-day work. We seek tech pros, great communicators, collaborators, and efficient team players for this role.


Job Description:


Experience: 5+ years (relevant experience as an SRE)


Open positions: 2


Job Responsibilities as an SRE


  • Must have very strong experience in Linux (Ubuntu) administration
  • Strong in network troubleshooting
  • Experienced in handling and diagnosing the root cause of compute and database outages
  • Strong experience required with cloud platforms, specifically Azure or GCP (proficiency in at least one is mandatory)
  • Must have very strong experience in designing, implementing, and maintaining highly available and scalable systems
  • Must have expertise in CloudWatch or similar log systems and in troubleshooting using them (see the sketch after this list)
  • Proficiency in scripting and programming languages such as Python, Go, or Bash is essential
  • Familiarity with configuration management tools such as Ansible, Puppet, or Chef is required
  • Must possess knowledge of database/SQL optimization and performance tuning.
  • Respond promptly to and resolve incidents to minimize downtime
  • Implement and manage infrastructure using IaC tools like Terraform, Ansible, or CloudFormation
  • Excellent problem-solving skills with a proactive approach to identifying and resolving issues are essential.
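A hedged sketch of the log-troubleshooting item above (see the CloudWatch bullet): a boto3 snippet that pulls recent ERROR lines from a CloudWatch Logs group; the log group name and time window are placeholders:

```python
import time

import boto3

LOG_GROUP = "/app/production/api"  # placeholder log group (hypothetical)

def recent_errors(minutes: int = 15) -> list[str]:
    """Fetch ERROR events from the last `minutes` minutes of a log group."""
    logs = boto3.client("logs")
    start = int((time.time() - minutes * 60) * 1000)  # epoch milliseconds
    events, token = [], None
    while True:
        kwargs = {"logGroupName": LOG_GROUP, "startTime": start,
                  "filterPattern": "ERROR"}
        if token:
            kwargs["nextToken"] = token
        resp = logs.filter_log_events(**kwargs)
        events += [e["message"] for e in resp["events"]]
        token = resp.get("nextToken")
        if not token:
            return events

if __name__ == "__main__":
    for line in recent_errors():
        print(line.rstrip())
```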


Wissen Technology

4 recruiters
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Mumbai, Pune
7 - 20 yrs
Best in industry
.NET
ASP.NET
C#
Google Cloud Platform (GCP)
Migration

Job Title: .NET Developer with Cloud Migration Experience

Job Description:

We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.

Responsibilities:

  • Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
  • Collaborate with cross-functional teams to define, design, and ship new features
  • Participate in code reviews and ensure coding best practices are followed
  • Work closely with the infrastructure team to migrate on-premise applications to the cloud
  • Troubleshoot and debug issues that arise during migration and post-migration phases
  • Stay updated with the latest trends and technologies in .NET development and cloud computing

Requirements:

  • Bachelor's degree in Computer Science or related field
  • X+ years of experience in .NET development using C#, MVC, and ASP.NET
  • Hands-on experience with cloud migration projects, preferably with Azure or AWS
  • Strong understanding of cloud computing concepts and principles
  • Experience with database technologies such as SQL Server
  • Excellent problem-solving and communication skills

Preferred Qualifications:

  • Microsoft Azure or AWS certification
  • Experience with other cloud platforms such as Google Cloud Platform (GCP)
  • Familiarity with DevOps practices and tools


Apptware solutions LLP Pune
Pune
6 - 10 yrs
₹9L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+5 more

Company - Apptware Solutions

Location - Baner, Pune

Team Size - 130+


Job Description -

Cloud Engineer with 8+ years of experience


Roles and Responsibilities


● Have 8+ years of strong experience in deployment, management and maintenance of large systems on-premise or cloud

● Experience maintaining and deploying highly-available, fault-tolerant systems at scale

● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)

● Practical experience with Docker containerization and clustering (Kubernetes/ECS)

● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)

● Version control system experience (e.g. Git)

● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)

● Operational (e.g. HA/backups) NoSQL experience (e.g. MongoDB, Redis)

● SQL experience (e.g. MySQL)

● Experience with configuration management tools (e.g. Ansible, Chef)

● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)

● Bachelor's or master’s degree in CS, or equivalent practical experience

● Effective communication skills

● Hands-on experience with cloud providers like MS Azure and Google Cloud

● A sense of ownership and ability to operate independently

● Experience with Jira and one or more Agile SDLC methodologies

● Nice to Have:

○ Sensu and Graphite

○ Ruby or Java

○ Python or Groovy

○ Java Performance Analysis


Role: Cloud Engineer

Industry Type: IT-Software, Software Services

Functional Area: IT Software - Application Programming, Maintenance

Employment Type: Full Time, Permanent

Role Category: Programming & Design

Publicis Sapient

10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration, and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation

Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines (see the sketch after this list).

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed, working knowledge of data platform related services on at least one cloud platform, IAM, and data security.
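A hedged, minimal PySpark Structured Streaming sketch of the end-to-end pipeline work item 3 points to: subscribing to a Kafka topic and writing to a console sink for demonstration; broker and topic names are placeholders, and the spark-sql-kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

BROKERS = "kafka-broker:9092"  # placeholder Kafka bootstrap servers
TOPIC = "clickstream"          # placeholder topic

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Source: Kafka delivers key/value as binary, so cast the value to string.
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", BROKERS)
         .option("subscribe", TOPIC)
         .load()
         .select(col("value").cast("string").alias("raw_event"))
)

# Sink: console output for the sketch; production would write to a lake/warehouse.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```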


Preferred Experience and Knowledge (Good to Have):

# Competency

1.Good knowledge of traditional ETL tools (Informatica, Talend, etc) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands on experience

2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4.Performance tuning and optimization of data pipelines

5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality

6.Cloud data specialty and other related Big data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient

10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for the data engineering solution. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration, and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in Big Data technologies.

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable.

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

# Competency

1.Good knowledge of traditional ETL tools (Informatica, Talend, etc) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands on experience

2.Knowledge on data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4.Performance tuning and optimization of data pipelines

5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality

6.Working knowledge with data platform related services on at least 1 cloud platform, IAM and data security

7.Cloud data specialty and other related Big data technology certifications



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Arahas Technologies
Posted by Nidhi Shivane
Pune
3 - 8 yrs
₹10L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+3 more


Role Description

This is a full-time hybrid role as a GCP Data Engineer. As a GCP Data Engineer, you will be responsible for managing large sets of structured and unstructured data and developing processes to convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location: Pune

If you have a passion for data engineering and possess the following, we would love to hear from you:


🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)

🔹 At least 4+ years of experience on the Google Cloud platform, with a focus on BigQuery

🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API scripting (see the sketch after this list)

🔹 Experience in the Finance/Revenue domain would be considered an added advantage

🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial
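As a small, hedged illustration of the BigQuery and API-scripting skills above, a google-cloud-bigquery snippet that runs a parameterized query; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery

def daily_revenue(day: str) -> list[tuple[str, float]]:
    """Return (product, revenue) rows for one day from a placeholder table."""
    client = bigquery.Client()  # uses Application Default Credentials
    job = client.query(
        """
        SELECT product, SUM(amount) AS revenue
        FROM `my-project.finance.orders`  -- placeholder table
        WHERE DATE(order_ts) = @day
        GROUP BY product
        ORDER BY revenue DESC
        """,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", day)]
        ),
    )
    return [(row["product"], float(row["revenue"])) for row in job.result()]

if __name__ == "__main__":
    for product, revenue in daily_revenue("2024-01-31"):
        print(product, revenue)
```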


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.

Ignite Solutions

6 recruiters
Posted by Meghana Dhamale
Remote, Pune
5 - 7 yrs
₹15L - ₹20L / yr
Python
LinkedIn
Django
Flask
Amazon Web Services (AWS)
+2 more

We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends. 

What will you work on?

  • Interface with clients
  • Recommend tech stacks
  • Define end-to-end logical and cloud-native architectures
  • Define APIs (see the sketch after this list)
  • Integrate with 3rd-party systems
  • Create architectural solution prototypes
  • Hands-on coding, team leading, code reviews, and problem-solving
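A minimal, hedged Flask sketch of the API-definition work in the list above; the routes and payload shape are illustrative assumptions, not a client system:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database (illustrative only).
PROJECTS: dict[int, dict] = {1: {"id": 1, "name": "cloud-migration"}}

@app.get("/api/projects/<int:project_id>")
def get_project(project_id: int):
    project = PROJECTS.get(project_id)
    if project is None:
        return jsonify(error="not found"), 404
    return jsonify(project)

@app.post("/api/projects")
def create_project():
    body = request.get_json(force=True)
    new_id = max(PROJECTS, default=0) + 1
    PROJECTS[new_id] = {"id": new_id, "name": body.get("name", "untitled")}
    return jsonify(PROJECTS[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```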

What Makes You A Great Fit?

  • 5+ years of software experience 
  • Experience architecting technology systems, with hands-on expertise in backend and web or mobile frontend
  • Solid expertise and hands-on experience in Python with Flask or Django
  • Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
  • Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
  • Knowledge of DevOps practices
  • Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
  • Excellent communication skills, verbal and written
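
As a rough, hedged sketch of the Python/Flask work implied above, here is a minimal Flask API; the routes and payloads are illustrative only, not any specific project's API:

# Minimal Flask sketch; the routes below are illustrative examples.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/health")
def health():
    # Simple liveness endpoint, handy behind a cloud load balancer.
    return jsonify(status="ok")

@app.route("/api/echo", methods=["POST"])
def echo():
    # Echo the JSON body back; a stand-in for real business logic.
    return jsonify(request.get_json(silent=True) or {})

if __name__ == "__main__":
    app.run(debug=True)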

The job is for a full-time position at our Pune (Viman Nagar) office (map: https://goo.gl/maps/o67FWr1aedo).

(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)

codersbrain

at codersbrain

1 recruiter
Tanuj Uppal
Posted by Tanuj Uppal
Hyderabad, Pune, Noida, Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
skill iconGo Programming (Golang)
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure

Golang Developer

Location: Chennai/ Hyderabad/Pune/Noida/Bangalore

Experience: 4+ years

Notice Period: Immediate/ 15 days

Job Description:

  • Must have at least 3 years of hands-on experience working with Golang.
  • Strong cloud experience (AWS, GCP, or Azure) is required for day-to-day work.
  • Good communication skills are a plus.
  • Skills: AWS, GCP, Azure, Golang
Tredence
Rohit S
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence's clients to identify critical business problems, define the need for data engineering solutions, and build the strategy and roadmap
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools / data models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
  o Provide consultation to senior client personnel
  o Implement and enhance data warehouses or data lakes
  o Work with business teams or as part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have a perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset and the ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools for one or more cloud platforms – Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
  o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – articles/white papers/interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
one of the world's leading multinational investment bank

one of the world's leading multinational investment bank

Agency job
via HiyaMee by Lithin Raj
Pune
9 - 13 yrs
₹10L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more
  • Hands-on knowledge of various CI-CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
  • Very good knowledge of scripting tools and languages such as Shell, Perl or Python, YAML/Groovy, and build tools such as Maven/Gradle.
  • Hands-on knowledge of containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
  • Good knowledge of configuration management tools such as Ansible, Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
  • Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
  • Hands-on knowledge of cloud technology (preferably GCP), including various computing services and infrastructure setup using Terraform.
  • Should have a basic understanding of networking, certificate management, Identity and Access Management, and information security/encryption concepts.
  • Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration and system/interface integration.
  • Should have experience working in an Agile-based SDLC delivery model and be able to multi-task and support multiple systems/apps.
  • Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
  • Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar
MNC

MNC

Agency job
via Bohiyaanam Talent Solutions by Harsha Manglani
Pune
6 - 9 yrs
₹1L - ₹25L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

We are hiring a DevOps Engineer for a reputed MNC.

 

Job Description:

Total experience: 6+ years

Must have:

Minimum 3-4 years of hands-on experience with Kubernetes and Docker

Proficiency in AWS Cloud

Kubernetes admin certification is good to have

 

Job Responsibilities:

Responsible for managing Kubernetes clusters

Deploying infrastructure for the project

Building CI/CD pipelines

 

Looking for Immediate Joiners only

Location: Pune

Salary: As per market standards

Mode: Work from office



 
xpressbees
Alfiya Khan
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is among the fastest-growing
companies in its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured / unstructured), and
build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting &
dashboarding requirements. As a Lead Data Engineer in the XB Data Platform Team, you will use
your experience with modern cloud and data frameworks to build products (with storage and serving
systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent
decision making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools (a sketch follows this list).
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database designs across platforms (MPP,
MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured).
Envision & implement the optimal data modelling, physical design and performance
optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.
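
By way of illustration for the pipeline work flagged in the first bullet above, a minimal PySpark batch-rollup sketch; the storage paths and column names are hypothetical and assume a Spark environment with S3 access:

# Minimal PySpark sketch: a daily rollup step of the kind a reporting
# pipeline might run. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-shipment-rollup").getOrCreate()

shipments = spark.read.parquet("s3://example-bucket/shipments/")
daily = (
    shipments
    .withColumn("day", F.to_date("scanned_at"))
    .groupBy("hub_id", "day")
    .count()
)
daily.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily/")
spark.stop()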

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience, including:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC processes
• Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills current and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud (AWS).

• Characteristics of a forward thinker and self-starter that flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
marsdevs.com
Vishvajit Pathak
Posted by Vishvajit Pathak
Remote, Pune
2 - 5 yrs
₹4L - ₹15L / yr
skill iconDjango
skill iconFlask
FastAPI
skill iconPython
skill iconDocker
+3 more

We have an immediate requirement for a Python web developer.

 

You have:

  • At least 2 years of experience developing web applications with Django/Flask/FastAPI (see the sketch after this list)
  • Familiarity with Linux
  • Experience with both SQL and NoSQL databases
  • Comfort using Docker and CI/CD
  • A habit of writing tests
  • Experience deploying applications and scaling them on AWS or GCP
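
To give a flavour of the stack (see the note in the first bullet), a minimal FastAPI app with a test, matching the testing expectation above; the route and names are illustrative, and the sketch assumes fastapi and httpx are installed:

# Minimal FastAPI sketch with a test; the route is an illustrative example.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int):
    # Trivial endpoint standing in for real application logic.
    return {"item_id": item_id}

client = TestClient(app)

def test_read_item():
    response = client.get("/items/42")
    assert response.status_code == 200
    assert response.json() == {"item_id": 42}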

 

You are:

  • Eager to work independently without being watched
  • Easy-going
  • Able to handle clients on your own

 

Location: Remote (in India)

 

Anetcorp Ind Pvt Ltd
Jyoti Yadav
Posted by Jyoti Yadav
Remote, Pune
6 - 12 yrs
₹10L - ₹25L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more
  • Essential Skills:
    • Docker
    • Jenkins
    • Python dependency management using conda and pip
  • Basic Linux system commands and scripting
  • Docker container build & testing
    • Common knowledge of minimizing container size and layers
    • Inspecting containers for unused / underutilized systems
    • Multiple Linux OS support for virtual systems
  • Experience as a user of Jupyter / JupyterLab to test and fix usability issues in workbenches
  • Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries) (see the sketch below)
  • Jenkins Pipeline
  • Understanding of the GitHub API to trigger builds, tags and releases
  • Artifactory experience
  • Nice to have: Kubernetes, ArgoCD and other deployment automation tool sets (DevOps)
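
Since the role mentions templating configurations with Python Jinja2, here is a minimal, hedged sketch of that pattern; the template and variable values are hypothetical examples, not the team's actual configs:

# Minimal Jinja2 sketch: render per-environment configs from one template.
# The template string and variable values are hypothetical.
from jinja2 import Template

template = Template(
    "image: {{ registry }}/workbench:{{ tag }}\n"
    "resources:\n"
    "  memory: {{ memory }}\n"
)

for env, memory in [("dev", "2Gi"), ("prod", "8Gi")]:
    # One rendered config per use case, as described above.
    print(template.render(registry="registry.example.com", tag=env, memory=memory))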
DREAMS Pvt Ltd
Siddhant Malani
Posted by Siddhant Malani
Pune, Bengaluru (Bangalore)
0 - 1 yrs
₹8000 - ₹12000 / mo
skill iconFlutter
DART
User Interface (UI) Design
User Experience (UX) Design
RESTful APIs
+3 more

Role description for the 3-month internship:

  • Create multi-platform apps for iOS & Android using Google's Flutter development framework
  • Strong OO design and programming skills in Dart and the SDK framework for building Android as well as iOS apps
  • Good expertise in Auto Layout and adding constraints programmatically
  • Must have experience with memory management, caching mechanisms, threading and performance tuning
  • Familiarity with RESTful APIs to connect Android & iOS applications to back-end services
  • Experience with third-party libraries and APIs
  • Collaborate with the team of product managers and developers to define, design, & deploy new features & functionality
  • Build software that ensures the best possible usability, performance, quality, & responsiveness of features
  • Work in a team following agile development practices (Scrum)
  • Proficient understanding of code versioning tools such as Git, Mercurial, or SVN, and project management tools (JIRA)
  • Utilize your knowledge of the general mobile landscape, architectures, trends, & emerging technologies
  • Solid understanding of the full mobile development life cycle and the ability to make use of it
  • Help develop and deploy good-quality UI
  • Good written, verbal, organizational and interpersonal skills
  • Unit-test code for robustness, including edge cases, usability, and general reliability
  • Excellent debugging and optimization skills
  • Strong design, development and debugging skills

Senwell Solutions

at Senwell Solutions

1 recruiter
Trupti Gholap
Posted by Trupti Gholap
Pune
1 - 3 yrs
₹2L - ₹7L / yr
skill iconAngular (2+)
skill iconAngularJS (1.x)
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+3 more

We are looking to hire an experienced Sr. Angular Developer to join our dynamic team. As a lead developer, you will be responsible for creating a top-level coding base using Angular best practices. To ensure success as an Angular developer, you should have extensive knowledge of theoretical software engineering, be proficient in TypeScript, JavaScript, HTML, and CSS, and have excellent project management skills. Ultimately, a top-class Angular developer can design and build a streamlined application to company specifications that perfectly meets the needs of the user.

 

Requirements:

 

  1. Bachelor’s degree in computer science, computer engineering, or similar
  2. Previous work Experience 2+ years as an Angular developer.
  3. Proficient in CSS, HTML, and writing cross-browser compatible code
  4. Experience using JavaScript & TypeScript building tools like Gulp or Grunt.
  5. Knowledge of JavaScript MV-VM/MVC frameworks including AngularJS / React.
  6. Excellent project management skills.

 

Responsibilities:

 

  1. Designing and developing user interfaces using Angular best practices.
  2. Adapting interface for modern internet applications using the latest front-end technologies.
  3. Writing TypeScript, JavaScript, CSS, and HTML.
  4. Developing product analysis tasks.
  5. Making complex technical and design decisions for AngularJS projects.
  6. Developing application codes in Angular, Node.js, and Rest Web Services.
  7. Conducting performance tests.
  8. Consulting with the design team.
  9. Ensuring high performance of applications and providing support.

 

This company provides on-demand cloud computing platforms.

This company provides on-demand cloud computing platforms.

Agency job
via New Era India by Niharica Singh
Remote, Pune, Mumbai, Bengaluru (Bangalore), Gurugram, Hyderabad
15 - 25 yrs
₹35L - ₹55L / yr
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
Architecture
skill iconPython
+5 more
  • 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
  • 15+ years of experience as a technical specialist in Customer-facing roles.
  • Ability to travel to client locations as needed (25-50%)
  • Extensive experience architecting, designing and programming applications in an AWS Cloud environment
  • Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
  • Agile software development expert
  • Experience with continuous integration tools (e.g. Jenkins)
  • Hands-on familiarity with CloudFormation
  • Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
  • Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
  • Strong practical application development experience on Linux and Windows-based systems
  • Extracurricular software development passion (e.g. active open source contributor)
InFoCusp

at InFoCusp

3 recruiters
Apurva Gayawal
Posted by Apurva Gayawal
Pune, Ahmedabad
3 - 7 yrs
₹7L - ₹27L / yr
skill iconJavascript
Cloud Computing
skill iconReact.js
skill iconPython
skill iconAmazon Web Services (AWS)
+4 more
InFoCusp is a company working in the broad field of Computer Science, Software Engineering,
and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in
Pune.

We have worked on / are working on Software Engineering projects that touch upon making
full-fledged products. Starting from UI/UX aspects, responsive and blazing fast front-ends,
platform-specific applications (Android, iOS, web applications, desktop applications), very
large scale infrastructure, cutting edge machine learning, and deep learning (AI in general).
The projects/products have wide-ranging applications in finance, healthcare, e-commerce,
legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this
is using core concepts of computer science such as distributed systems, operating systems,
computer networks, process parallelism, cloud computing, embedded systems and the
Internet of Things.

PRIMARY RESPONSIBILITIES:
● Own the design, development, evaluation and deployment of highly-scalable software
products involving front-end and back-end development.
● Maintain quality, responsiveness and stability of the system.
● Design and develop memory-efficient, compute-optimized solutions for the
software.
● Design and administer automated testing tools and continuous integration
tools.
● Produce comprehensive and usable software documentation.
● Evaluate and make decisions on the use of new tools and technologies.
● Mentor other development engineers.

KNOWLEDGE AND SKILL REQUIREMENTS:
● Mastery of one or more back-end programming languages (Python, Java, Scala, C++
etc.)
● Proficiency in front-end programming paradigms and libraries (for example: HTML,
CSS and advanced JavaScript libraries and frameworks such as Angular, Knockout,
React).
● Knowledge of automated and continuous integration testing tools (Jenkins,
TeamCity, CircleCI etc.)
● Proven experience of platform-level development for large-scale systems.
● Deep understanding of various database systems (MySQL, MongoDB,
Cassandra).
● Ability to plan and design software system architecture.
● Development experience for mobile, browsers and desktop systems is
desired.
● Knowledge and experience of using distributed systems (Hadoop, Spark)
and cloud environments (Amazon EC2, Google Compute Engine, Microsoft
Azure).
● Experience working in agile development. Knowledge and prior experience of tools
like Jira is desired.
● Experience with version control systems (Git, Subversion or Mercurial).
Publicis Sapient

at Publicis Sapient

10 recruiters
Pooja Singh
Posted by Pooja Singh
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Hyderabad, Pune
4 - 19 yrs
₹1L - ₹15L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
Microservices
+7 more
  • Experience building large-scale, large-volume services & distributed apps, taking them through production and post-production life cycles
  • Experience in programming languages: Java 8, JavaScript
  • Experience in microservice development or architecture
  • Experience with web application frameworks: Spring, Spring Boot or Micronaut
  • Designing: high-level/low-level design
  • Development experience: Agile/Scrum, TDD (Test-Driven Development) or BDD (Behaviour-Driven Development), plus unit testing
  • Infrastructure experience: DevOps, CI/CD pipelines, Docker/Kubernetes/Jenkins, and cloud platforms like AWS, Azure, GCP, etc.
  • Experience with one or more databases: RDBMS or NoSQL
  • Experience with one or more messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
  • Security (authentication, scalability, performance monitoring)
Abishar Technologies

at Abishar Technologies

1 recruiter
Chandra Goswami
Posted by Chandra Goswami
Pune
6 - 10 yrs
₹8L - ₹23L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Role and responsibilities

 

  • Expertise in AWS (most typical services), Docker & Kubernetes.
  • Strong scripting knowledge, strong DevOps automation, good at Linux.
  • Hands-on with CI/CD (CircleCI preferred, but any CI/CD tool will do). Strong understanding of GitHub.
  • Strong understanding of AWS networking; strong with security & certificates.

Nice-to-have skills

  • Involved in Product Engineering
Leading Payment Solution Company

Leading Payment Solution Company

Agency job
via People First Consultants by Aishwarya KA
Chennai, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai
9 - 16 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
+9 more

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.

  • Role Overview
    • Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
  • Key Knowledge
    • 3-5+ years of experience in AWS/GCP or Azure technologies
    • Is likely certified on one or more of the major cloud platforms
    • Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
    • Ability to guide and lead internal agile teams on cloud technology
    • Background from the financial services industry or similar critical operational experience
Leading Payment Solution Company

Leading Payment Solution Company

Agency job
Remote, Bengaluru (Bangalore), Chennai, Pune, Hyderabad, Mumbai
3 - 10 yrs
₹8L - ₹28L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more

Experience: 3+ years of experience in Cloud Architecture

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.



Cloud Architect / Lead

  • Role Overview
    • Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
  • Key Knowledge
    • 3-5+ years of experience in AWS/GCP or Azure technologies
    • Is likely certified on one or more of the major cloud platforms
    • Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
    • Ability to guide and lead internal agile teams on cloud technology
    • Background from the financial services industry or similar critical operational experience
 
Intuitive Technology Partners
Aakriti Gupta
Posted by Aakriti Gupta
Remote, Ahmedabad, Pune, Gurugram, Chennai, Bengaluru (Bangalore), india
6 - 12 yrs
Best in industry
DevOps
skill iconKubernetes
skill iconDocker
Terraform
Linux/Unix
+10 more

Intuitive is the fastest-growing top-tier Cloud Solutions and Services company supporting global enterprise customers across the Americas, Europe and the Middle East.

Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.

Job Description :

  • Extensive experience with K8s (EKS/GKE) and k8s ecosystem tooling, e.g. Prometheus, ArgoCD, Grafana, Istio etc.
  • Extensive AWS/GCP core infrastructure skills
  • Infrastructure / IaC automation and integration - Terraform
  • Kubernetes resources engineering and management (see the sketch after this list)
  • Experience with DevOps tools, CI/CD pipelines and release management
  • Good at creating documentation (runbooks, design documents, implementation plans)
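
As a small illustration of the Kubernetes resources management point above, a hedged sketch using the official kubernetes Python client; it assumes a working kubeconfig, and the restart threshold is an arbitrary example:

# Sketch: list pods and flag restart-happy ones with the kubernetes client.
# Assumes `pip install kubernetes` and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if restarts > 5:  # arbitrary threshold for this sketch
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {restarts} restarts")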

Linux Experience :

  1. Namespace
  2. Virtualization
  3. Containers

 

Networking Experience

  1. Virtual networking
  2. Overlay networks
  3. VXLANs, GRE

 

Kubernetes Experience :

Should have experience in bringing up a Kubernetes cluster manually, without using the kubeadm tool.

 

Observability                              

Experience in observability is a plus

 

Cloud automation :

Familiarity with cloud platforms (exclusively AWS) and DevOps tools like Jenkins, Terraform, etc.

 

DataMetica

at DataMetica

1 video
7 recruiters
Sayali Kachi
Posted by Sayali Kachi
Pune, Hyderabad
2 - 6 yrs
₹3L - ₹15L / yr
Google Cloud Platform (GCP)
SQL
BQ

Datametica is looking for talented BigQuery engineers

 

Total Experience - 2+ yrs.

Notice Period – 0 - 30 days

Work Location – Pune, Hyderabad

 

Job Description:

  • Sound understanding of Google Cloud Platform; should have worked on BigQuery, Workflows, or Composer
  • Experience in migration to GCP and integration projects in large-scale environments, including ETL technical design, development, and support
  • Good SQL skills and Unix scripting; programming experience with Python, Java, or Spark would be desirable (see the sketch below)
  • Experience in SOA and services-based data solutions would be advantageous
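
For a concrete flavour of BigQuery-centric ETL on GCP (see the note in the SQL/scripting bullet), a hedged sketch of batch-loading a CSV from Cloud Storage into BigQuery; the bucket, dataset and table names are hypothetical:

# Sketch: batch-load a CSV from GCS into BigQuery.
# Assumes google-cloud-bigquery and application-default credentials;
# all resource names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "example-project.staging.daily_orders"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema for this sketch
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/orders/2024-01-01.csv", table_id, job_config=job_config
)
load_job.result()  # wait for the job to complete
print(client.get_table(table_id).num_rows, "rows loaded")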

 

About the Company: 

www.datametica.com

Datametica is among the world's leading Cloud and Big Data analytics companies.

Datametica was founded in 2013 and has grown at an accelerated pace within a short span of 8 years. We provide a broad and capable set of services that encompass a vision of success, driven by innovation and value addition, helping organizations make strategic decisions that influence business growth.

Datametica is the global leader in migrating Legacy Data Warehouses to the Cloud. Datametica moves Data Warehouses to Cloud faster, at a lower cost, and with few errors, even running in parallel with full data validation for months.

Datametica's specialized team of Data Scientists has implemented award-winning analytical models for use cases involving both unstructured and structured data.

Datametica has earned the highest level of partnership with Google, AWS, and Microsoft, which enables Datametica to deliver successful projects for clients across industry verticals at a global level, with teams deployed in the USA, EU, and APAC.

 

Recognition:

We are gratified to be recognized as a Top 10 Big Data Global Company by CIO story.

 

If it excites you, please apply.

Information Technology Services

Information Technology Services

Agency job
via Jobdost by Sathish Kumar
Pune
5 - 9 yrs
₹10L - ₹30L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+8 more
Preferred Education & Experience: 
• Bachelor’s or master’s degree in Computer Engineering,
Computer Science, Computer Applications, Mathematics, Statistics or related technical field or
equivalent practical experience. Relevant experience of at least 3 years in lieu of above if from a
different stream of education.
• Well-versed in DevOps principles & practices and hands-on DevOps
tool-chain integration experience: Release Orchestration & Automation, Source Code & Build
Management, Code Quality & Security Management, Behavior Driven Development, Test Driven
Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and
Operational Monitoring & Management; extra points if you can demonstrate your knowledge with
working examples.
• Hands-on, demonstrable working experience with DevOps tools
and platforms viz., Slack, Jira, GIT, Jenkins, Code Quality & Security Plugins, Maven, Artifactory,
Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK,
PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate
experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox,
Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate
experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in
any of the categories: Compute or Storage, Database, Networking & Content Delivery, Management &
Governance, Analytics, Security, Identity, & Compliance, (or) equivalent demonstrable Cloud
Platform experience.
• Well-versed with demonstrable working experience with API Management,
API Gateway, Service Mesh, Identity & Access Management, Data Protection & Encryption, tools &
platforms.
• Hands-on programming experience in either core Java and/or Python and/or JavaScript
and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in
languages they have studied.
• Well-versed with Storage, Networks and Storage Networking basics
which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and
Application Security basics which will enable you to work in a Cloud as well as Business
Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure
and/or Google Cloud.
Product based company specializes into architectural product

Product based company specializes into architectural product

Agency job
via Jobdost by Sathish Kumar
Pune, Hyderabad, Gandhinagar
6 - 12 yrs
₹5L - ₹21L / yr
skill iconAmazon Web Services (AWS)
skill iconDocker
skill iconKubernetes
DevOps
Windows Azure
+9 more

Key Skills Required:

 

·         You will be part of the DevOps engineering team, configuring project environments, troubleshooting integration issues in different systems, and building new features for the next generation of cloud recovery services and managed services.

·         You will directly guide the technical strategy for our clients and build out a new capability within the company for DevOps to improve our business relevance for customers. 

·         You will be coordinating with the Cloud and Data teams for their requirements, verifying the configurations required for each production server, and coming up with scalable solutions.

·         You will be responsible for reviewing the infrastructure and configuration of microservices, and the packaging and deployment of applications.

 

To be the right fit, you'll need:

 

·         Expert in Cloud Services like AWS.

·         Experience in Terraform Scripting.

·         Experience in container technology like Docker and orchestration like Kubernetes.

·         Good knowledge of frameworks such as Jenkins CI/CD pipelines, Bamboo, etc.

·         Experience with various version control systems like Git, build tools (Maven, Ant, Gradle) and cloud automation tools (Chef, Puppet, Ansible)

Information Technology Services

Information Technology Services

Agency job
via Jobdost by Sathish Kumar
Pune
5 - 8 yrs
₹10L - ₹30L / yr
skill iconJava
skill iconPython
skill iconJavascript
skill iconScala
skill iconDocker
+5 more
 Sr. DevOps Software Engineer:
Preferred Education & Experience:
Bachelor’s or master’s degree in Computer Engineering,
Computer Science, Computer Applications, Mathematics, Statistics or related technical field or
equivalent practical experience. Relevant experience of at least 3 years in lieu of above if from a different stream of education.

• Well-versed in DevOps principles & practices and hands-on DevOps
tool-chain integration experience: Release Orchestration & Automation, Source Code & Build
Management, Code Quality & Security Management, Behavior Driven Development, Test Driven
Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and
Operational Monitoring & Management; extra points if you can demonstrate your knowledge with
working examples.
• Hands-on, demonstrable working experience with DevOps tools
and platforms viz., Slack, Jira, GIT, Jenkins, Code Quality & Security Plugins, Maven, Artifactory,
Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK,
PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate
experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox,
Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate
experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in
any of the categories: Compute or Storage, Database, Networking & Content Delivery, Management &
Governance, Analytics, Security, Identity, & Compliance, (or) equivalent demonstrable Cloud
Platform experience.
• Well-versed with demonstrable working experience with API Management,
API Gateway, Service Mesh, Identity & Access Management, Data Protection & Encryption, tools &
platforms.
• Hands-on programming experience in either core Java and/or Python and/or JavaScript
and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in
languages they have studied.
• Well-versed with Storage, Networks and Storage Networking basics
which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and
Application Security basics which will enable you to work in a Cloud as well as Business
Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure
and/or Google Cloud.
Required Experience: 5+ Years
Job Location: Remote/Pune
MNC Company - Product Based

MNC Company - Product Based

Agency job
via Bharat Headhunters by Ranjini C. N
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
skill iconPython
Google Cloud Platform (GCP)
+2 more

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse (a sketch follows this list)
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments
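
As a sketch for the ETL responsibility flagged in the first bullet, a minimal Airflow DAG (Airflow/Cloud Composer appears in the skills below); the DAG id, schedule and callable are hypothetical and assume Airflow 2.4+:

# Minimal Airflow DAG sketch for a daily ETL step; ids and logic are
# hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for real extract/transform/load logic
    # (e.g. pull from an API, clean with Python, load via SQL).
    print("running ETL step")

with DAG(
    dag_id="daily_ads_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)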

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
  • 4 - 6 years experience with data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and Data Processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
  • Good working experience in any one of the ETL tools (Airflow would be preferable).
  • Should possess strong analytical and problem solving skills.
  • Good-to-have skills: Apache PySpark, CircleCI, Terraform
  • Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
  • Understanding & experience of agile / scrum delivery methodology

 

For an well reputed MNC  - as of its WFH

For an well reputed MNC - as of its WFH

Agency job
via Volibits by Hima Bindu
Bengaluru (Bangalore), Pune, Mumbai
4 - 8 yrs
₹2L - ₹15L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Description:

 

Mandatory Skills:

Should have strong working experience with Cloud technologies like AWS and Azure.

Should have strong working experience with CI/CD tools like Jenkins and Rundeck.

Must have experience with configuration management tools like Ansible.

Must have working knowledge on tools like Terraform.

Must be good at scripting languages like shell scripting and Python.

Should be an expert in DevOps practices and should have demonstrated the ability to apply that knowledge across diverse projects and teams.
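
By way of example for the AWS and scripting requirements above, a small, hedged boto3 sketch; the region and filter values are illustrative:

# Sketch: list running EC2 instances with boto3, the kind of small Python
# automation this role calls for. Assumes configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance.get("PrivateIpAddress"))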

 

Preferable skills:

Experience with tools like Docker, Kubernetes, Puppet, JIRA, GitLab and JFrog.

Experience in scripting languages like Groovy.

Experience with GCP

 

Summary & Responsibilities:

 Write build pipelines and IaC (ARM templates, Terraform or CloudFormation).

 Develop Ansible playbooks to install and configure various products.

 Implement Jenkins and Rundeck jobs (and pipelines).

 Must be a self-starter, able to work well in a fast-paced, dynamic environment.

 Work independently and resolve issues with minimal supervision.

 Strong desire to learn new technologies and techniques.

 Strong communication (written / verbal) skills.

 

Qualification:

Bachelor's degree in Computer Science or equivalent.

4+ years of experience in DevOps and AWS.

2+ years of experience in Python, Shell scripting and Azure.

 

 

Market Intelligence Research Platform

Market Intelligence Research Platform

Agency job
via Bohiyaanam by Greta R
Pune
2 - 5 yrs
₹4L - ₹15L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

We are seeking a passionate DevOps Engineer to help create the next big thing in data analysis and search solutions.

You will join our Cloud infrastructure team supporting our developers. As a DevOps Engineer, you'll be automating our environment setup and developing infrastructure as code to create a scalable, observable, fault-tolerant and secure environment. You'll incorporate open-source tools, automation, and Cloud Native solutions, and will empower our developers with this knowledge.

We will pair you up with world-class talent in cloud and software engineering and provide a position and environment for continuous learning.

Numerator

at Numerator

4 recruiters
Ketaki Kambale
Posted by Ketaki Kambale
Remote, Pune
5 - 10 yrs
₹10L - ₹30L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more
This role requires a balance between hands-on infrastructure-as-code deployments and involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.
 
Responsibilities
  • Selects, develops, and evaluates local personnel to ensure the efficient operation of the function

  • Leads and mentors local DevOps team members throughout the organisation.

  • Stays current with industry standards.

  • Work across engineering team to help define scope and task assignments

  • Participate in code reviews, design work and troubleshooting across business functions, multiple teams and product groups to help communicate, document and address infrastructure issues.

  • Look for innovative ways to improve observability and monitoring of large-scale systems over a variety of technologies across the Numerator organization.

  • Participate in the creation of training material, helping teams embrace a culture of DevOps with self-healing and self-service ecosystems. This includes discovery, testing and integration of third party solutions in product roadmaps.

  • Lead by example and evangelize DevOps best practices within the team and within the organization and product teams in Numerator.

 

Technical Skills

  • 2+ years of experience in cloud-based systems, in an SRE or DevOps position

  • Professional and positive approach, self-motivated, strong in building relationships, team player, dynamic, creative with the ability to work on own initiatives.

  • Excellent oral and written communication skills.

  • Availability to participate in after-hours on-call support with your fellow engineers and help improve a team’s on-call process where necessary.

  • Strong analytical and problem solving mindset combined with experience of troubleshooting large-scale systems.

  • Working knowledge of networking, operating systems and packaging/build systems, i.e. Amazon Linux, Ubuntu, pip and npm, Terraform, Ansible etc.

  • Strong working knowledge of Serverless and Kubernetes based environments in AWS, Azure and Google Cloud Platform (GCP).

  • Experience in managing highly redundant data stores, file systems and services both in the cloud and on-premise including both data transfer, redundancy and cost-management.

  • Ability to quickly stand up AWS or other cloud-based platform services in isolation or within product environments to test out a variety of solutions or concepts before developing production-ready solutions with the product teams.

  • Bachelor's or Master's in Science, or postdoctorate in Computer Science or related field, or equivalent work experience.

Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
skill iconDocker
+2 more

Position Summary

DevOps is a department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is for a Cloud Engineering role for someone who also has experience with infrastructure migrations. This will be a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; alongside that, you are also expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work that we are doing is for Sitecore.

Sitecore is a .NET-based, enterprise-level web CMS, which can be deployed on-prem, on IaaS, on PaaS, or in containers.

So, most of our DevOps work is currently planning, architecting and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of technical, commercial and service elements related to cloud migration and infrastructure deployments.
  • The person selected for this position will ensure high customer satisfaction while delivering infra and migration projects.
  • The candidate must expect to work in parallel across multiple projects and must also have a fully flexible approach to working hours.
  • The candidate should keep up to date with the rapid technological advancements and developments taking place in the industry.
  • The candidate should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.

Requirements:

  • Bachelor’s degree in computer science or equivalent qualification.
  • Total work experience of 6 to 8 Years.
  • Total migration experience of 4 to 6 Years.
  • Multiple Cloud Background (Azure/AWS/GCP)
  • Implementation knowledge of VMs and VNets
  • Know-how of cloud readiness and assessment
  • Good Understanding of 6 R's of Migration.
  • Detailed understanding of the cloud offerings
  • Ability to Assess and perform discovery independently for any cloud migration.
  • Working experience with containers and Kubernetes.
  • Good knowledge of Azure Site Recovery / Azure Migrate / CloudEndure
  • Understanding of vSphere and Hyper-V virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS Cloud formation/Terraform templates.
  • Working experience with VPN / ExpressRoute / peering / Network Security Groups / route tables / NAT Gateway, etc.
  • Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps and GitHub Actions.
  • High Availability and Disaster Recovery implementations, taking into consideration RTO and RPO aspects.
  • Candidates with AWS/Azure/GCP Certifications will be preferred.
Searce Inc

at Searce Inc

64 recruiters
Yashodatta Deshapnde
Posted by Yashodatta Deshapnde
Pune, Noida, Bengaluru (Bangalore), Mumbai, Chennai
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
skill iconKubernetes
Google Cloud Platform (GCP)
Terraform
skill iconJenkins
+2 more
Role & Responsibilities:
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with platforms such as GitLab, CircleCI and Spinnaker
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team

Preferable Skills:
• Familiarity with standard IT security practices such as encryption,
credentials and key management.
• Proven experience in various coding languages (Java, Python) to
support DevOps operations and cloud transformation
• Familiarity with and knowledge of web standards (e.g. REST APIs, web security mechanisms)
• Hands-on experience with GCP
• Experience in performance tuning, service outage management and troubleshooting.

Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management and organizational skills; ability to operate independently and make decisions with little direct supervision
Yojito Software Private Limited
Tushar Khairnar
Posted by Tushar Khairnar
Pune
1 - 4 yrs
₹4L - ₹8L / yr
DevOps
skill iconDocker
skill iconKubernetes
skill iconPython
SQL
+4 more

We are looking for people with programming skills in Python, SQL and cloud computing. Candidates should have experience with at least one of the major cloud-computing platforms (AWS/Azure/GCP), professional experience handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.
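
As a small, hedged example of working with Docker images from Python, a sketch using the docker SDK; the image and command are illustrative, and it assumes a local Docker daemon plus `pip install docker`:

# Sketch: run a throwaway container from Python with the docker SDK.
import docker

client = docker.from_env()
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # clean up the container after it exits
)
print(output.decode().strip())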

 

You will be responsible for

  • Leading the DevOps strategy and development of SaaS product deployments
  • Leading and mentoring other computer programmers.
  • Evaluating student work and providing guidance in the online courses in programming and cloud computing.

 

Desired experience/skills

Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.

 

Skills:

  • Strong programming skills in Python and SQL
  • Cloud computing

 

Experience:

2+ years of programming experience, including Python, SQL, and cloud computing. Familiarity with a command-line working environment.

 

Note: A strong programming background, in any language and cloud computing platform, is required. We are flexible about the degree of familiarity needed for the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in others, you should still consider applying.

 

Soft Skills:

  • Good interpersonal, written, and verbal communication skills, including the ability to explain concepts to others.
  • A strong understanding of algorithms and data structures, and their performance characteristics.
  • Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
  • Detail-oriented and well-organized.