Google Cloud Platform (GCP) Jobs in Delhi, NCR and Gurgaon

Apply to 33+ Google Cloud Platform (GCP) jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Google Cloud Platform (GCP) job opportunities across top companies like Google, Amazon, and Adobe.
Publicis Sapient

at Publicis Sapient

10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate (People Senior Associate L1) in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing packaged solutions, and will independently drive design discussions to ensure the health of the overall solution.

Job Summary:

As Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing packaged solutions, and will independently drive design discussions to ensure the health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation
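The ingestion bullets above would normally be implemented with the stack named in this posting (Spark, NiFi, Kafka). Purely as an illustration of the underlying idea, here is a stdlib-only Python sketch of batch ingestion that normalizes records from two heterogeneous sources into a single schema; all field names and record shapes are invented for the example.

```python
import csv
import io
import json

def from_csv(text):
    """Yield normalized records from a CSV export."""
    for row in csv.DictReader(io.StringIO(text)):
        yield {"id": int(row["id"]), "amount": float(row["amount"]), "source": "csv"}

def from_json_lines(text):
    """Yield normalized records from a JSON-lines feed."""
    for line in text.strip().splitlines():
        obj = json.loads(line)
        yield {"id": int(obj["user_id"]), "amount": float(obj["value"]), "source": "json"}

def ingest(*sources):
    """Merge all sources into a single, schema-consistent batch."""
    batch = []
    for source in sources:
        batch.extend(source)
    return batch

csv_feed = "id,amount\n1,10.5\n2,7.25\n"
json_feed = '{"user_id": 3, "value": 4.0}\n{"user_id": 4, "value": 1.5}\n'
records = ingest(from_csv(csv_feed), from_json_lines(json_feed))
print(len(records))                        # 4 records, all in one schema
print(sum(r["amount"] for r in records))   # 23.25
```

In a production pipeline the per-source adapters would read from object storage or a message bus rather than in-memory strings, but the normalize-then-merge shape is the same.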

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies

2. Minimum 2.5 years of experience in Big Data technologies, with working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)

3. Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and other components required in building end-to-end data pipelines

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality

6. Cloud data specialty and other related Big Data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient

at Publicis Sapient

10 recruiters
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate (People Senior Associate L1) in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing packaged solutions, and will independently drive design discussions to ensure the health of the overall solution.

Job Summary:

As Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions, utilizing a deep understanding of data integration and big data design principles in creating custom solutions or implementing packaged solutions. You will independently drive design discussions to ensure the health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies

2. Minimum 1.5 years of experience in Big Data technologies

3. Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search and indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD: infrastructure provisioning on cloud, automated build and deployment pipelines, code quality

6. Working knowledge of data-platform-related services on at least one cloud platform, IAM, and data security

7. Cloud data specialty and other related Big Data technology certifications



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
"DATA STREAMING"

Data Engineering: Senior Engineer / Manager

As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing packaged solutions, and will independently drive design discussions to ensure the health of the overall solution.

Must-Have Skills:

1. GCP

2. Spark Streaming: live data streaming experience is desired.

3. Any one coding language: Java / Python / Scala
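Since live data streaming experience is called out, the core computation is worth recalling: a streaming job aggregates events into windows. The following is a pure-Python sketch of a tumbling-window count, the kind of aggregation a Spark Streaming / Structured Streaming pipeline runs over live events; it is illustrative only, with invented event data, and deliberately leaves Spark itself out.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed windows and count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)   # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Five events spread across two 10-second windows
events = [(0, "click"), (3, "view"), (7, "click"), (12, "click"), (14, "view")]
result = tumbling_window_counts(events, window_seconds=10)
print(result)  # {0: {'click': 2, 'view': 1}, 10: {'click': 1, 'view': 1}}
```

In Spark Structured Streaming the same shape is expressed declaratively with `groupBy(window(...), col(...)).count()`; the sketch just makes the window-bucketing arithmetic explicit.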



Skills & Experience:

- Overall experience of a minimum of 5 years, with at least 4 years of relevant experience in Big Data technologies

- Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage

- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

- Well-versed, working knowledge of data-platform-related services on GCP

- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position


Your Impact:


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation

VoerEir India

at VoerEir India

2 recruiters
Posted by Pooja Jaiswal
Noida
3 - 5 yrs
₹13L - ₹15L / yr
Python
Django
Flask
Linux/Unix
Computer Networking
+3 more

Roles and Responsibilities

• Ability to create solution prototypes and conduct proofs of concept of new tools.

• Work on research and understanding of new tools and areas.

• Clearly articulate the pros and cons of various technologies/platforms, and perform detailed analysis of business problems and technical environments to derive a solution.

• Optimisation of the application for maximum speed and scalability.

• Work on feature development and bug fixing.

Technical skills

• Must have knowledge of networking in Linux, and the basics of computer networks in general.

• Must have intermediate/advanced knowledge of one programming language, preferably Python.

• Must have experience writing shell scripts and configuration files.

• Should be proficient in Bash.

• Should have excellent Linux administration capabilities.

• Working experience with SCM; Git is preferred.

• Knowledge of build and CI/CD tools like Jenkins, Bamboo, etc. is a plus.

• Understanding of the architecture of OpenStack/Kubernetes is a plus.

• Code contributed to the OpenStack/Kubernetes community will be a plus.

• Data center network troubleshooting will be a plus.

• Understanding of the NFV and SDN domains will be a plus.
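As a small illustration of the "shell scripts and configuration files" requirement, here is a stdlib-only Python sketch that generates an INI-style configuration and reads it back, as an automation script for a service might; the sections and keys are hypothetical.

```python
import configparser
import io

# Build a configuration for a hypothetical service
config = configparser.ConfigParser()
config["network"] = {"interface": "eth0", "mtu": "1500"}
config["service"] = {"port": "8080", "workers": "4"}

buf = io.StringIO()
config.write(buf)           # a real script would write a file on disk

# Read it back and verify typed access
parsed = configparser.ConfigParser()
parsed.read_string(buf.getvalue())
print(parsed["service"].getint("port"))   # 8080
print(parsed["network"]["interface"])     # eth0
```

The same round-trip pattern (generate, deploy, parse, validate) underlies most configuration automation, whether the format is INI, YAML, or shell-sourced variables.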

Soft skills

• Excellent verbal and written communication skills.

• Highly driven, with a positive attitude; a team player; self-learning, self-motivating, and flexible.

• Strong customer focus; decent networking and relationship management.

• Flair for creativity and innovation.

• Strategic thinking.

This is an individual contributor role and will need client interaction on the technical side.

Must-have skills: Linux, Networking, Python, Cloud

Additional skills: OpenStack, Kubernetes, Shell, Java, Development


Career Forge

at Career Forge

2 candid answers
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, Pyspark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/Numpy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.
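As an illustration of the "time-series processing" item, the sketch below downsamples a telemetry series by averaging values into fixed time buckets, roughly what `Series.resample(...).mean()` does in pandas, written here with the standard library only; the telemetry values are invented.

```python
from collections import defaultdict

def resample_mean(samples, bucket_seconds):
    """Average (timestamp, value) samples into fixed-size time buckets."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - (ts % bucket_seconds)].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# Irregular telemetry samples: (seconds, reading)
telemetry = [(0, 1.0), (4, 3.0), (11, 10.0), (19, 20.0), (21, 7.0)]
print(resample_mean(telemetry, bucket_seconds=10))
# {0: 2.0, 10: 15.0, 20: 7.0}
```

Downsampling like this is typically the first step before alignment, interpolation, or feature extraction on vehicle or test-bench telemetry.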


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Nayan Technologies
Agency job
via OptimHire by Ajdevi Kindo
South Extension II, Delhi
4 - 9 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Description


BUDGET: 20 LPA (MAX)

What you will do - Key Responsibilities


  • The DevOps architect will be responsible for testing, QC, and debugging support across the various server-side and Java software/servers for products developed or procured by the company; they will debug problems with the integration of all software and with on-field deployment, and suggest improvements/workarounds ("hacks") as well as structured solutions/approaches.
  • Responsible for scaling the architecture towards 10M+ users.
  • Will work closely with other team members, including other web developers, software developers, application engineers, and product managers, to test and deploy existing products for the various specialists and personnel using the software.
  • Will act in the capacity of Team Lead as necessary to coordinate and organize individual effort towards a successful completion/demo of an application.
  • Will be solely responsible for application approval before demos to clients, sponsors, and investors.


Essential Requirements


  • Should understand the ins and outs of Docker and Kubernetes
  • Can architect complex cloud-based solutions using multiple products on either AWS or GCP
  • Should have a solid understanding of cryptography and secure communication
  • Know your way around Unix systems and can write complex shell scripts comfortably
  • Should have a solid understanding of Processes and Thread Scheduling at the OS level
  • Skilled with Ruby, Python or similar scripting languages
  • Experienced with installing and managing multiple GPUs spread across multiple machines
  • Should have at least 5 years managing large server deployments

Category

DevOps Engineer (IT & Networking)

Expertise

DevOps - 3 Years - Intermediate
Python - 2 Years
AWS - 3 Years - Intermediate
Docker - 3 Years - Intermediate
Kubernetes - 3 Years - Intermediate

codersbrain

at codersbrain

1 recruiter
Posted by Tanuj Uppal
Hyderabad, Pune, Noida, Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
Go Programming (Golang)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure

Golang Developer

Location: Chennai/ Hyderabad/Pune/Noida/Bangalore

Experience: 4+ years

Notice Period: Immediate/ 15 days

Job Description:

  • Must have at least 3 years of experience working with Golang.
  • Strong Cloud experience is required for day-to-day work.
  • Experience with the Go programming language is necessary.
  • Good communication skills are a plus.
  • Skills: AWS, GCP, Azure, Golang
LambdaTest

at LambdaTest

3 recruiters
Posted by Himanshi Tomer
Noida
1 - 5 yrs
₹6L - ₹25L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+9 more

Description

Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.


What You'll Do


• Troubleshooting and analyzing technical issues raised by internal and external users.

• Working with monitoring tools like Prometheus / Nagios / Zabbix.

• Developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef (preferred).

• Monitoring infrastructure alerts and taking proactive action to avoid downtime and customer impact.

• Working closely with cross-functional teams to resolve issues.

• Testing, building, designing, and deploying, with the ability to maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.

• Working in close coordination with the development and operations teams to keep application performance in line with the customer's expectations.


What you should have


• Bachelor’s or Master’s degree in computer science or a related field.

• 3 - 6 years of experience in Linux / Unix and cloud computing techniques.

• Familiarity with working on cloud and data centers for enterprise customers.

• Hands-on experience with Linux / Windows / macOS and Batch/Apple/Bash scripting.

• Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.

• Familiarity with AWS technologies like EC2, S3, Lambda, IAM, etc.

• Must know how to choose the tools and technologies that best fit the business needs.

• Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins, etc.

• Excellent organizational skills, with the ability to adapt to a constantly changing technical environment.

Shiprocket

at Shiprocket

5 recruiters
Posted by Kailuni Lanah
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 6 yrs
₹14L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more

We are seeking an experienced DevOps Engineer across product lines.

Key Responsibilities

• Deploying, automating, maintaining, and managing AWS cloud-based production systems, to ensure the availability, performance, scalability, and security of production systems.

• Build, release, and configuration management of production systems.

• System troubleshooting and problem solving across platform and application domains.

• Suggesting architecture improvements and recommending process improvements.

• Ensuring critical system security through the use of best-in-class cloud security solutions.

THE POSITION - DevOps Engineer

Qualification

• BTech from premier institutes

Work Experience

• AWS: 1-3 years’ experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, EBS, S3, VPC, Glacier, IAM, CloudWatch, KMS, Lambda) to develop and maintain an Amazon AWS-based cloud solution, with an emphasis on best-practice cloud security.

Skill Set

• DevOps: Solid experience as a DevOps Engineer in a 24x7 uptime Amazon AWS environment, including automation experience with configuration management tools.

• Scripting skills: Strong scripting (e.g. Python, shell scripting) and automation skills.

• Monitoring tools: Experience with system monitoring tools (e.g. Nagios).

• Problem solving: Ability to analyze and resolve complex infrastructure resource and application deployment issues.

• DB skills: Basic DB administration experience (RDS, MongoDB) and experience in setting up and managing AWS Aurora databases.

• ELK: Proficiency in ELK setup.

• GitHub: Experience in maintaining and administering GitHub.


• Accountable for proper backup and disaster recovery procedures.

• Experience with Puppet, Chef, Ansible, or Salt.

• Professional commitment to high quality, and a passion for learning new skills.

• Detail-oriented individual with the ability to rapidly learn new concepts and technologies.

• Strong problem-solving skills, including providing simple solutions to complex situations.

• Must be a strong team player with the ability to communicate and collaborate effectively in a geographically dispersed working environment.
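One small, concrete piece of the "proper backup and disaster recovery procedures" bullet is a retention policy. A minimal stdlib-only sketch, assuming backups carry sortable date suffixes (the filenames below are hypothetical):

```python
def backups_to_delete(backup_names, keep):
    """Given backup names with sortable date suffixes, list the expendable ones."""
    ordered = sorted(backup_names, reverse=True)   # newest first
    return sorted(ordered[keep:])                  # everything past the keep window

backups = [
    "db-2023-01-01.tar.gz",
    "db-2023-01-03.tar.gz",
    "db-2023-01-02.tar.gz",
    "db-2023-01-05.tar.gz",
    "db-2023-01-04.tar.gz",
]
print(backups_to_delete(backups, keep=3))
# ['db-2023-01-01.tar.gz', 'db-2023-01-02.tar.gz']
```

A real rotation script would list objects in S3 or files on disk and delete the returned names; separating the "decide" step from the "delete" step keeps the policy testable.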

Voiceoc

at Voiceoc

1 video
1 recruiter
Posted by Aman Dhankhar
Noida
1 - 2 yrs
₹7.5L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+4 more

The candidate must have 2-3 years of experience in the domain. The responsibilities include:

● Deploying system on Linux-based environment using Docker

● Manage & maintain the production environment

● Deploy updates and fixes

● Provide Level 1 technical support

● Build tools to reduce occurrences of errors and improve customer experience

● Develop software to integrate with internal back-end systems

● Perform root cause analysis for production errors

● Investigate and resolve technical issues

● Develop scripts to automate visualization

● Design procedures for system troubleshooting and maintenance

● Experience working on Linux-based infrastructure

● Excellent understanding of the MERN stack, Docker & Nginx (good to have: Node.js)

● Configuring and managing databases such as MongoDB

● Excellent troubleshooting skills

● Experience of working with AWS/Azure/GCP

● Working knowledge of various tools, open-source technologies, and cloud services

● Awareness of critical concepts in DevOps and Agile principles

● Experience of CI/CD Pipeline
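A tiny illustration of the "root cause analysis for production errors" bullet: tally ERROR lines per component to find the dominant failure. The log format and component names are invented for the example.

```python
import re
from collections import Counter

# Matches lines like "12:00:01 ERROR payments: timeout talking to gateway"
LOG_PATTERN = re.compile(r"^\S+ (ERROR|WARN|INFO) (\w+):")

def top_errors(log_lines, n=3):
    """Count ERROR lines by component and return the n most frequent."""
    counts = Counter()
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m and m.group(1) == "ERROR":
            counts[m.group(2)] += 1
    return counts.most_common(n)

logs = [
    "12:00:01 ERROR payments: timeout talking to gateway",
    "12:00:02 INFO auth: login ok",
    "12:00:03 ERROR payments: timeout talking to gateway",
    "12:00:04 ERROR search: index unavailable",
    "12:00:05 WARN search: slow query",
]
print(top_errors(logs))  # [('payments', 2), ('search', 1)]
```

In practice the same aggregation would run over log files or a centralized logging backend, but starting from frequency counts is a common first step before digging into individual stack traces.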

Classplus

at Classplus

1 video
4 recruiters
Posted by Peoples Office
Noida
5 - 8 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+11 more
About us

Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon, Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured a “Series-D” funding.

Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!

What will you do?

Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective

Create standardized tooling and templates for development teams to create CI/CD pipelines

Ensure infrastructure is created and maintained using terraform

Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.

Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation

Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs

You should apply, if you

1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript(NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)

2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.

3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning

4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience in creating and managing infrastructure completely through a Terraform-like tool

5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s

6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)

7. Bring that extra zing: Have the ability to program/script, and strong fundamentals in Linux and networking.

8. Know your toys: Have a good understanding of Microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self hosted environments, that’s a plus

Being Part of the Clan

At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!

It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️

Are you a go-getter with the chops to nail what you do? Then this is the place for you.
Classplus

at Classplus

1 video
4 recruiters
Posted by Peoples Office
Noida
8 - 10 yrs
₹35L - ₹55L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+16 more

About us

 

Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.

 

Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon, Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured a “Series-D” funding.

 

Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!

 

 

What will you do?

 

· Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective

 

· Create standardized tooling and templates for development teams to create CI/CD pipelines

 

· Ensure infrastructure is created and maintained using terraform

 

· Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.

 

· Maintain transparency and clear visibility of costs associated with various product verticals, environments and work with stakeholders to plan for optimization and implementation

 

· Spearhead continuous experimenting and innovating initiatives to optimize the infrastructure in terms of uptime, availability, latency and costs

 

 

You should apply, if you

 

 

1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript(NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)

 

2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.

 

3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning

 

4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience in creating and managing infrastructure completely through a Terraform-like tool

 

5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s

 

6. Have nailed the data storage game: Good knowledge of Relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)

 

7. Bring that extra zing: Have the ability to program/script, and strong fundamentals in Linux and networking.

 

8. Know your toys: Have a good understanding of Microservices architecture, Big Data technologies and experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self hosted environments, that’s a plus

 

 

Being Part of the Clan

 

At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!

 

It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️

 

Are you a go-getter with the chops to nail what you do? Then this is the place for you.

Remote only
3 - 8 yrs
₹5L - ₹20L / yr
React.js
GitHub
Mobile applications
JavaScript
GraphQL
+16 more
Roles & Responsibilities:
• Make technical and product decisions based on the roadmap autonomously
• Contribute to team and organizational improvements in process and infrastructure
• Build new features, fix bugs, and suggest projects that will improve productivity and
infrastructure
• Code, test and operate React.js-based services
• Debug production issues across services and multiple levels of the stack
Requirements
• Education: Engineering Graduate/ Postgraduate in Computer Science
• Experience: 2-5 Years of Experience as a Front end Developer and at least 2 years of
React.js experience
Must Have
• 2+ years of UI development
• 2+ years of React.js development
• Knowledge of GraphQL and TypeScript
• Experience with Node.js
• Sass, Git, Linux
• Knowledge of Microservice architecture
• Knowledge of Docker
• Experience with AWS, Azure or Google Cloud
• Good understanding of MySQL/MariaDB
• Understanding of Non-relational Databases MongoDB, Redis
• Hands-on experience with various web design technologies: HTML, CSS, Ajax and
jQuery.
• Strong analytical and debugging skills
Read more
Leverage Edu

at Leverage Edu

1 recruiter
Agency job
via MNM FILS SOLUTIONS by JAISRIRAM NEELAMEGAN
Noida
1 - 6 yrs
₹17L - ₹20L / yr
NodeJS (Node.js)
React.js
Angular (2+)
AngularJS (1.x)
MERN Stack
+5 more

Responsibilities:

  • Developing high-performance applications by writing testable, reusable, and efficient code.
  • Implementing effective security protocols, data protection measures, and storage solutions.
  • Recommending and implementing improvements to processes and technologies.
  • Designing customer-facing UI and back-end services for various business processes.

 

Requirements:

  • 1-8 years of appropriate technical experience
• Proven experience as a Full Stack Developer or similar role, preferably using the MEAN/MERN stack.
  • Good understanding of client-side and server-side architecture
  • Excellent interpersonal skills and professional approach
• Ability to work in a dynamic, fast-moving and growing environment
  • Be determined and willing to learn new technologies
  • Immediate joiners are preferred

 

Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Pooja Singh
Posted by Pooja Singh
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Hyderabad, Pune
4 - 19 yrs
₹1L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Microservices
+7 more
  • Experience building large scale, large volume services & distributed apps., taking them through production and post-production life cycles
• Experience in Programming Languages: Java 8, JavaScript
• Experience in Microservice Development or Architecture
• Experience with Web Application Frameworks: Spring, Spring Boot or Micronaut
  • Designing: High Level/Low-Level Design
• Development Experience: Agile/Scrum, TDD (Test Driven Development) or BDD (Behaviour Driven Development), plus Unit Testing
  • Infrastructure Experience: DevOps, CI/CD Pipeline, Docker/ Kubernetes/Jenkins, and Cloud platforms like – AWS, AZURE, GCP, etc
  • Experience on one or more Database: RDBMS or NoSQL
  • Experience on one or more Messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
  • Security (Authentication, scalability, performance monitoring)
Read more
Quess Corp Limited

at Quess Corp Limited

6 recruiters
Anjali Singh
Posted by Anjali Singh
Noida, Delhi, Gurugram, Ghaziabad, Faridabad, Bengaluru (Bangalore), Chennai
5 - 8 yrs
₹1L - ₹15L / yr
Google Cloud Platform (GCP)
Python
Big Data
Data processing
Data Visualization

A GCP Data Analyst profile must have the below skill sets:

 

Read more
APT Portfolio

at APT Portfolio

1 recruiter
Ankita  Pachauri
Posted by Ankita Pachauri
Delhi, Gurugram, Bengaluru (Bangalore)
10 - 15 yrs
₹50L - ₹70L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+13 more

A.P.T Portfolio is a high frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.


As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:

  • Private Cloud - Design & maintain a high performance and reliable network architecture to support  HPC applications
• Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
  • Security - Implementing best security practices and implementing data isolation policy between different divisions internally. 
  • Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis. 
  • Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
  • NFS - Implement and optimize latest version of NFS for our use case. 
  • Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm. 
  • BackUps  - Identify and automate  back up of all crucial data/binary/code etc in a secured manner at such duration warranted by the use case. Ensure that recovery from back-up is tested and seamless. 
• Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
  •  Operating System  -Plan, test and roll out new operating system for all production, simulation and desktop environments. Work closely with developers to highlight new performance enhancements capabilities of new versions. 
• Configuration management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef etc. for the same.
• Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
  • Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity. 
• Maintaining all third party tools used for development and collaboration - This shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo etc.
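The backup responsibility above (automate backups of crucial data and verify that recovery actually works) can be sketched in Python with the standard library alone; the paths and file names here are illustrative:

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

def backup_and_verify(src: Path, dest_dir: Path) -> Path:
    """Create a tar.gz backup of src and confirm the archive is readable."""
    archive = dest_dir / f"{src.name}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    # Recovery must be tested, not assumed: reopen and list members.
    with tarfile.open(archive, "r:gz") as tar:
        assert tar.getnames(), "backup archive is empty"
    # Store a checksum next to the archive for later integrity checks.
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_suffix(".sha256").write_text(digest)
    return archive

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    (tmp / "data").mkdir()
    (tmp / "data" / "crucial.txt").write_text("state")
    print(backup_and_verify(tmp / "data", tmp).name)  # data.tar.gz
```

A production version would add scheduling, retention and off-site copies, but the create-then-verify loop is the essential part.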


Qualifications 

• Bachelor's or Master's level degree, preferably in CSE/IT
• 10+ years of relevant experience in a sys-admin function
• Must have strong knowledge of IT Infrastructure, Linux, Networking and grid computing.
• Must have a strong grasp of automation & data management tools.
• Proficient in scripting languages, especially Python


Desirables

  • Professional attitude, co-operative and mature approach to work, must be focused, structured and well considered, troubleshooting skills.
  •  Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.

 

APT Portfolio is an equal opportunity employer

Read more
Top startup of India -  News App
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Linux/Unix
Python
Hadoop
Apache Spark
MongoDB
+4 more
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional
business requirements.
● Building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Maintain, organize & automate data processes for various use cases.
● Identifying trends, doing follow-up analysis, preparing visualizations.
● Creating daily, weekly and monthly reports of product KPIs.
● Create informative, actionable and repeatable reporting that highlights
relevant business trends and opportunities for improvement.

Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science
● Strong analytical, quantitative and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL
databases (MongoDB preferred) and Linux is a must.
● Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
● Experience with Google Cloud Data Analytics Products such as BigQuery, Dataflow, Dataproc etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment, and use of
command-line tools including knowledge of shell/Python scripting for
automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
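The KPI-reporting responsibilities above boil down to aggregating event data by day. A small stdlib-only Python sketch with a toy event schema (the field names are assumptions for illustration, not a real product schema):

```python
from collections import Counter
from datetime import date

# Toy event records standing in for product analytics data.
events = [
    {"day": date(2022, 5, 1), "user": "u1", "action": "open"},
    {"day": date(2022, 5, 1), "user": "u2", "action": "open"},
    {"day": date(2022, 5, 1), "user": "u1", "action": "share"},
    {"day": date(2022, 5, 2), "user": "u1", "action": "open"},
]

def daily_kpis(events):
    """Compute daily active users (DAU) and event counts per day."""
    dau, volume = {}, Counter()
    for e in events:
        dau.setdefault(e["day"], set()).add(e["user"])
        volume[e["day"]] += 1
    return {d: {"dau": len(users), "events": volume[d]} for d, users in dau.items()}

print(daily_kpis(events))
```

In practice the same aggregation would run in Spark or BigQuery over much larger sets; the grouping logic is identical.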
Read more
VECROS TECHNOLOGIES PRIVATE LIMITED

at VECROS TECHNOLOGIES PRIVATE LIMITED

1 video
4 recruiters
BESTA PREM
Posted by BESTA PREM
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
1 - 3 yrs
₹5L - ₹8L / yr
React.js
AngularJS (1.x)
Angular (2+)
NodeJS (Node.js)
MongoDB
+6 more

Requirements:

  • Bachelor's or master’s degree in computer science or equivalent work experience.
  • A solid understanding of the full web technology stack and experience in developing web applications
  • Strong experience with React JS framework for building visually appealing interfaces and back end frameworks like Django
  • Knowledge of multiple front-end languages and libraries (e.g. HTML/ CSS, JavaScript, XML, jQuery)
  • Good knowledge of object-oriented programming and data structures in Python
  • Experience with NoSQL databases, web server technologies, designing, and developing APIs.
  • Strong knowledge and work experience in AWS cloud services
  • Proficiency with Git, CI/CD pipelines.
  • Knowledge in Agile/Scrum processes.
  • Experience in Docker container usage.

 

Roles and responsibilities:

  • Develop sophisticated web applications for drone control, data management, security, and data protection.
  • Build scalable features using advanced framework concepts such as Microservices, Queues, Jobs, Events, Task Scheduling, etc.
  • Integrate with Third-Party APIs/services
  • Use theoretical knowledge and/or work experience to find innovative solutions to the problems at hand.
  • Collaborate with team members to ideate solutions.
  • Troubleshoot, debug, and upgrade existing software.
  • Passionate to learn and adapt in a start-up environment.

 

Perks:

  • Hands-on experience with state of the art facilities we use for robot development.
  • Opportunity to work with industry experts and researchers in the field of AI, Computer Vision, and Machine Learning.
  • Competitive salary.
  • Stock options.
  • Opportunity to be an early part of the team and grow with the startup.
  • Freedom of working schedule.
  • Opportunity to kickstart and lead your own projects.
Read more
Dhwani Rural Information Systems
Sunandan Madan
Posted by Sunandan Madan
gurgaon
2 - 6 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+10 more
Job Overview
We are looking for an excellent experienced person in the Dev-Ops field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to
deploy and release their code seamlessly.

Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
• Understanding accessibility and security compliance (depending on the specific project)
• User authentication and authorization between multiple systems, servers, and environments
• Integration of multiple data sources and databases into one system
• Understanding fundamental design principles behind a scalable application
• Configuration management tools (Ansible/Chef/Puppet); Cloud Service Providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem is a plus
• Should be able to make key decisions for our infrastructure, networking and security
• Manipulation of shell scripts during migration and DB connection
• Monitor production server health across different parameters (CPU load, physical memory, swap memory) and set up a monitoring tool such as Nagios to monitor production server health
• Create alerts and configure monitoring of specified metrics to manage the cloud infrastructure efficiently
• Set up and manage VPCs and subnets; make connections between different zones; block suspicious IPs/subnets via ACLs
• Create and manage AMIs/snapshots/volumes; upgrade/downgrade AWS resources (CPU, memory, EBS)
• The candidate will be responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams

Skills required:
• Strong knowledge of configuration management tools like Ansible, Chef, Puppet
• Extensive work with change tracking tools like JIRA and log analysis; maintaining documents of production server error log reports
• Experienced in troubleshooting, backup, and recovery
• Excellent knowledge of cloud service providers like AWS, DigitalOcean
• Good knowledge of the Docker and Kubernetes ecosystem
• Proficient understanding of code versioning tools, such as Git
• Must have experience working in an automated environment
• Good knowledge of Amazon Web Services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, Amazon CloudWatch
• Scheduling jobs using crontab; creating swap memory
• Proficient knowledge of access management (IAM)
• Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
• Candidate should have good knowledge of GCP.
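Blocking suspicious IPs/subnets via ACLs, mentioned above, usually starts with a subnet-membership check before any rule is written. Python's stdlib `ipaddress` module covers this; the blocked ranges below are example/documentation ranges, not real rules:

```python
import ipaddress

# Example blocklist: a private range and a documentation range (RFC 5737).
BLOCKED = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED)

print(is_blocked("203.0.113.7"))  # True
print(is_blocked("192.0.2.1"))    # False
```

The same membership test is what a NACL or firewall rule ultimately encodes; scripting it first makes the intended policy easy to review.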

Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in the relevant field
Experience: 2-6 years
Read more
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
1 - 10 yrs
₹5L - ₹30L / yr
Docker
Kubernetes
DevOps
Linux/Unix
SQL Azure
+9 more

Mandatory:
● A minimum of 1 year of development, system design or engineering experience
● Excellent social, communication, and technical skills
● In-depth knowledge of Linux systems
● Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
● In-depth knowledge of web servers (Apache, Nginx preferred)
● Strong in using DevOps tools - Ansible, Jenkins, Docker, ELK
● Knowledge of APM tools; New Relic is preferred
● Ability to learn quickly, master our existing systems and identify areas of improvement
● Self-starter that enjoys and takes pride in the engineering work of their team
● Tried and tested real-world cloud computing experience - AWS/GCP/Azure
● Strong understanding of resilient systems design
● Experience in network design and management
Read more
MNC Company - Product Based
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
+2 more

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments
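The first responsibility above, building and testing ETL processes with Python & SQL, can be sketched end to end with stdlib `sqlite3` standing in for the warehouse (the `campaign`/`clicks` schema is invented purely for illustration):

```python
import sqlite3

def run_etl(rows):
    """Extract raw rows, transform (filter + normalize), load into SQLite, report."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ads (campaign TEXT, clicks INTEGER)")
    # Transform: drop rows with missing clicks, normalize campaign names.
    clean = [(r["campaign"].strip().lower(), int(r["clicks"]))
             for r in rows if r.get("clicks") is not None]
    conn.executemany("INSERT INTO ads VALUES (?, ?)", clean)
    return conn.execute(
        "SELECT campaign, SUM(clicks) FROM ads GROUP BY campaign ORDER BY campaign"
    ).fetchall()

raw = [
    {"campaign": " Spring ", "clicks": 10},
    {"campaign": "spring", "clicks": 5},
    {"campaign": "winter", "clicks": None},  # dropped by the transform step
]
print(run_etl(raw))  # [('spring', 15)]
```

On GCP the load target would typically be BigQuery and the orchestration Airflow/Cloud Composer, but the extract-transform-load shape is the same.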

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
• 4-6 years of experience in data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and Data Processing.
• Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
  • Good working experience in any one of the ETL tools (Airflow would be preferable).
  • Should possess strong analytical and problem solving skills.
• Good to have skills - Apache PySpark, CircleCI, Terraform
  • Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
  • Understanding & experience of agile / scrum delivery methodology

 

Read more
Neurosensum

at Neurosensum

5 recruiters
Tanuj Diwan
Posted by Tanuj Diwan
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 3 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum which breaks the conventional market research turnaround time.

SurveySensum is becoming a great tool to not only capture feedback but also extract useful insights with quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This makes us challenge conventional software development design principles. The team likes to grind and helps each other lift in tough situations.

Day to day responsibilities include:

1. Work on the deployment of code via Bitbucket, AWS CodeDeploy and manually
  2. Work on Linux/Unix OS and Multi tech application patching
  3. Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
  4. Create and modify scripts or applications to perform tasks
  5. Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
  6. Easing developers’ life so that they can focus on the business logic rather than deploying and maintaining it. 
  7. Managing release of the sprint. 
  8. Educating team of the best practices.
  9. Finding ways to avoid human error and save time by automating the processes using Terraform, CloudFormation, Bitbucket pipelines, CodeDeploy, scripting
  10. Implementing cost effective measure on cloud and minimizing existing costs.

Skills and prerequisites

  1. OOPS knowledge
  2. Problem solving nature
  3. Willing to do the R&D
  4. Works with the team and support their queries patiently 
  5. Bringing new things on the table - staying updated 
  6. Pushing solution above a problem. 
  7. Willing to learn and experiment
  8. Techie at heart
  9. Git basics
  10. Basic AWS or any cloud platform – creating and managing ec2, lambdas, IAM, S3 etc
  11. Basic Linux handling 
  12. Docker and orchestration (Great to have)
  13. Scripting – python (preferably)/bash
Read more
Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
+2 more

Position Summary

DevOps is a Department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is for a Cloud Engineering role for someone who also has some experience with infrastructure migrations. This will be a completely hands-on job, focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead. Alongside that, you are also expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.

Sitecore is a .NET-based enterprise-level web CMS, which can be deployed on-prem, IaaS, PaaS or containers.

So, most of our DevOps work is currently planning, architecting and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of technical, commercial and service elements related to cloud migration and Infrastructure deployments.
• The person selected for this position will ensure high customer satisfaction while delivering infra and migration projects.
  • Candidate must expect to work in parallel across multiple projects, along with that candidate must also have a fully flexible approach to working hours.
  • Candidate should keep him/herself updated with the rapid technological advancements and developments that are taking place in the industry.
• Along with that, the candidate should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps and CI/CD pipelines.

Requirements:

  • Bachelor’s degree in computer science or equivalent qualification.
  • Total work experience of 6 to 8 Years.
  • Total migration experience of 4 to 6 Years.
  • Multiple Cloud Background (Azure/AWS/GCP)
• Implementation knowledge of VMs, VNets, etc.
  • Know-how of Cloud Readiness and Assessment
  • Good Understanding of 6 R's of Migration.
  • Detailed understanding of the cloud offerings
  • Ability to Assess and perform discovery independently for any cloud migration.
  • Working Exp. on Containers and Kubernetes.
  • Good Knowledge of Azure Site Recovery/Azure Migrate/Cloud Endure
  • Understanding on vSphere and Hyper-V Virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS Cloud formation/Terraform templates.
  • Working Experience of VPN/Express route/peering/Network Security Groups/Route Table/NAT Gateway, etc.
  • Experience of working with CI/CD tools like Octopus, Teamcity, Code Build, Code Deploy, Azure DevOps, GitHub action.
  • High Availability and Disaster Recovery Implementations, taking into the consideration of RTO and RPO aspects.
  • Candidates with AWS/Azure/GCP Certifications will be preferred.
Read more
Remote, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Mumbai
2 - 8 yrs
₹9L - ₹15L / yr
DevOps
Kubernetes
Windows Azure
Google Cloud Platform (GCP)
Jenkins
+11 more
Tools you’ll need to succeed this role are as follows:
● Good experience with Continuous integration and deployment tools like
Jenkins, Spinnaker, etc.
● Ability to understand problems and craft maintainable solutions.
● Working cross-functionally with a broad set of business partners to understand
and integrate their API or data flow systems with Xeno, so a minimal
understanding of data and API integration is a must.
● Experience with docker and microservice based architecture using
orchestration platforms like Kubernetes.
● Understanding of Public Cloud, We use Azure and Google Cloud.
● Familiarity with web servers like Apache, nginx, etc.
● Possessing knowledge of monitoring tools such as Prometheus, Grafana, New
Relic, etc.
● Scripting in languages like Python, Golang, etc is required.
● Some knowledge of database technologies like MYSQL and Postgres is
required.
● Understanding Linux, specifically Ubuntu.
● Bonus points for knowledge and best practices related to security.
● Knowledge of Java or NodeJS would be a significant advantage.
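Monitoring with tools like Prometheus, Grafana or New Relic, as listed above, often begins with extracting a metric from logs. A stdlib-only Python sketch computing a 5xx error rate from common-log-format access lines (the sample lines below are fabricated for illustration):

```python
import re

# Common log format: host ident user [time] "request" status size
LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "[^"]*" (?P<status>\d{3}) \d+')

def error_rate(lines):
    """Fraction of requests with a 5xx status among parseable log lines."""
    statuses = [m.group("status") for m in map(LOG_LINE.match, lines) if m]
    if not statuses:
        return 0.0
    return sum(s.startswith("5") for s in statuses) / len(statuses)

sample = [
    '127.0.0.1 - - [01/Jan/2022:00:00:01 +0000] "GET / HTTP/1.1" 200 512',
    '127.0.0.1 - - [01/Jan/2022:00:00:02 +0000] "GET /api HTTP/1.1" 502 0',
]
print(error_rate(sample))  # 0.5
```

A monitoring agent would export this as a gauge and an alert rule would fire when it crosses a threshold; the parsing step is where custom scripting usually comes in.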


Initially, when you join some of the projects you’d get to own are:
● Audit and improve overall security of the Infrastructure.
● Setting up different environments for different sets of teams like
QA,Development, Business etc.
Read more
A leading Edtech company
Agency job
via Jobdost by Sathish Kumar
Noida
5 - 8 yrs
₹12L - ₹17L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+4 more
• Minimum 3+ years of experience in DevOps with the AWS platform
• Strong AWS knowledge and experience
• Experience in using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)
• Experience with IaC tools like Terraform
• Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)
• Significant experience with Linux operating system environments
• Experience with infrastructure scripting solutions such as Python/shell scripting
• Must have experience in designing infrastructure automation frameworks.
• Good experience in setting up monitoring tools and dashboards (Grafana/Kafka)
• Excellent problem-solving, log analysis and troubleshooting skills
• Experience in setting up centralized logging for systems (EKS, EC2) and applications
• Process-oriented with great documentation skills
• Ability to work effectively within a team and with minimal supervision
Read more
Searce Inc

at Searce Inc

64 recruiters
Yashodatta Deshapnde
Posted by Yashodatta Deshapnde
Pune, Noida, Bengaluru (Bangalore), Mumbai, Chennai
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Terraform
Jenkins
+2 more
Role & Responsibilities :
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is a mandate
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge and hands-on experience of DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge and hands-on experience of various platforms (e.g. GitLab, CircleCI and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team

Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials and key management.
• Proven experience with various coding languages (Java, Python) to support DevOps operations and cloud transformation
• Familiarity and knowledge of web standards (e.g. REST APIs, web security mechanisms)
• Hands on experience with GCP
• Experience in performance tuning, services outage management and troubleshooting.

Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills. Ability to operate independently and make decisions with little direct supervision
Read more
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 10 yrs
₹18L - ₹22L / yr
DevOps
Linux/Unix
Shell Scripting
Google Cloud Platform (GCP)
Databases
+4 more
This is a great opportunity for young and ambitious people looking to make a career in IT infrastructure solution sales!!

Our Client is an IT infrastructure services company, focused and specialized in delivering solutions and services on Microsoft products and technologies. They are a Microsoft partner and cloud solution provider. Our Client's objective is to help small, mid-sized as well as global enterprises to transform their business by using innovation in IT, adapting to the latest technologies and using IT as an enabler for business to meet business goals and continuous growth.

With focused and experienced management and a strong team of IT Infrastructure professionals, they are adding value by making IT Infrastructure a robust, agile, secure and cost-effective service to the business. As an independent IT Infrastructure company, they provide their clients with unbiased advice on how to successfully implement and manage technology to complement their business requirements.
 
As a Systems Engineer/DevOps Engineer, you will work in a proactive environment, ensuring availability and reliability of existing systems, deploying changes, and acting as a technical point of escalation for user incidents.
 
What you will do:
  • Providing on-call support within a high availability production environment
  • Logging issues
  • Providing Complex problem analysis and resolution for technical and application issues
  • Supporting and collaborating with team members
  • Running system updates
  • Monitoring and responding to system alerts
  • Developing and running system health checks
  • Applying industry standard practices across the technology estate
  • Performing system reviews
  • Reviewing and maintaining infrastructure configuration
  • Diagnosing performance issues and network bottlenecks
  • Collaborating within geographically distributed teams
  • Supporting software development infrastructure by continuous integration and delivery standards
  • Working closely with developers and QA teams as part of a customer support centre
• Project delivery work, either individually or in conjunction with other teams, external suppliers or contractors
  • Ensuring maintenance of the technical environments to meet current standards
  • Ensuring compliance with appropriate industry and security regulations
  • Providing support to Development and Customer Support teams
  • Managing the hosted infrastructure through vendor engagement
  • Managing 3rd party software licensing ensuring compliance
  • Delivering new technologies as agreed by the business
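"Developing and running system health checks", from the list above, can start as simply as a disk-usage probe. A minimal Python sketch (the 90% threshold is an arbitrary example value, not a recommendation):

```python
import shutil

def disk_health(path="/", warn_pct=90):
    """Simple health check: flag when disk usage crosses a threshold."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    status = "WARN" if used_pct >= warn_pct else "OK"
    return {"path": path, "used_pct": round(used_pct, 1), "status": status}

print(disk_health())
```

Real health checks layer in CPU, memory, service reachability and alert routing, but each probe follows the same measure-compare-report pattern.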

 


Candidate Profile:

What you need to have:

  • Experience working within a technical operations environment relevant to associated skills stated.
  • Be proficient in:
1. Linux, zsh/bash or similar
2. ssh, tmux/screen or similar
3. vim/emacs or similar
  4. Computer networking
  • Have a reasonable working knowledge of:
  1. Cloud infrastructure, Preferably GCP
  2. One or more programming/ scripting languages
  3. Git
  4. Docker
  5. Web services and web servers
  6. Databases, relational and NoSQL
  • Some familiarity with:
  1. Puppet, ansible
  2. Terraform
  3. GitHub, CircleCI , Kubernetes
  4. Scripting language- Shell
  5. Databases: Cassandra, Postgres, MySQL or CloudSQL
  6. Agile working practices including scrum and Kanban
  7. Private & public cloud hosting environments
  • Strong technology interests with a positive ‘can do’ attitude
  • Be flexible and adaptable to changing priorities
  • Be good at planning and organising their own time and able to meet targets and deadlines without supervision
  • Excellent written and verbal communication skills.
  • Approachable with both colleagues and team members
  • Be resourceful and practical with an ability to respond positively and quickly to technical and business challenges
  • Be persuasive, articulate and influential, but down to earth and friendly with own team and colleagues
  • Have an ability to establish relationships quickly and to work effectively either as part of a team or singularly
  • Be customer focused with both internal and external customers
  • Be capable of remaining calm under pressure
  • Technically minded with good problem resolution skills and systematic manner
  • Excellent documentation skills
  • Prepared to participate in out of hours support rota

 

Read more
Aureus Tech Systems

at Aureus Tech Systems

3 recruiters
Krishna Kanth
Posted by Krishna Kanth
Hyderabad, Bengaluru (Bangalore), Chennai, Visakhapatnam, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 14 yrs
₹18L - ₹25L / yr
.NET
C#
ASP.NET
Web API
LINQ
+3 more

Title: .NET Developer with Cloud

Locations: Hyderabad, Chennai, Bangalore, Pune and New Delhi (Remote).

Job Type: Full Time


.NET Job Description:

Required experience on below skills:

  • Azure experience (mandatory)
  • .NET programming (mandatory)
  • DevSecOps capabilities (desired)
  • Scripting skills (desired)
  • Docker (desired)
  • Data lake management (desired)
  • Minimum of 5+ years of application development experience
  • Experience with MS Azure: App Service, Functions, Cosmos DB and Active Directory
  • Deep understanding of C#, .NET Core, ASP.NET Web API 2, MVC
  • Experience with MS SQL Server
  • Strong understanding of object-oriented programming
  • Experience working in an Agile environment
  • Strong understanding of code versioning tools such as Git or Subversion
  • Use of automated builds, unit testing and continuous integration systems
  • Excellent communication, presentation, influencing, and reasoning skills
  • Capable of building relationships with colleagues and key individuals
  • Willingness and ability to learn new technologies

StarClinch

at StarClinch

1 video
9 recruiters
Misbah From
Posted by Misbah From
NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹4.8L - ₹8.4L / yr
Python
Django
HTML/CSS
JScript
Bitbucket
+6 more
  • Back-end development using Python/Django
  • Front-end development using CSS, HTML and JS
  • Write reusable, testable, and efficient code
  • Implement security and data protection
  • Use Amazon Relational Database Service
  • Commit, push, pull and sync to Bitbucket, GitLab
  • Deployment of code on MS Azure and AWS
  • Build efficient scripts and cron jobs in GCP
  • Connect apps and automate workflows using Integromat
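The responsibilities above stress reusable, testable code and scheduled jobs (cron in GCP). A minimal sketch of that style, assuming a hypothetical booking-cleanup task (the function and data are illustrative, not StarClinch's actual code):

```python
from datetime import datetime, timezone

def expired_bookings(bookings, now=None):
    """Return IDs of bookings whose event date has passed.

    Pure and side-effect free, so it is easy to unit-test and to reuse
    from both a Django view and a cron-driven cleanup script.
    """
    now = now or datetime.now(timezone.utc)
    return [b["id"] for b in bookings if b["event_date"] < now]

# A cron job (or a GCP Cloud Scheduler target) would fetch rows from
# the database, call expired_bookings(), and archive the returned IDs.
sample = [
    {"id": 1, "event_date": datetime(2020, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "event_date": datetime(2099, 1, 1, tzinfo=timezone.utc)},
]
print(expired_bookings(sample))  # [1]
```

Keeping the decision logic in a pure function like this is what makes the "write reusable, testable, and efficient code" bullet practical day to day.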
Requirements
  • 3+ years of professional full-time experience building and maintaining complex software on a cross-functional team. You'll join us in writing clean, maintainable software that solves hard problems. You'll write testable, quality code. You'll push the team and the mission forward with your contributions.
  • Python and Django
  • Strong database skills
  • Basic systems administration
  • Bachelors or Masters in Computer Science Engineering (or equivalent)
  • Minimum product development experience of 3+ years in web/mobile startups, with expertise in designing and implementing high-performance web applications.
  • You're an incessant problem solver: the tougher the problem gets, the more fun you have.
  • You love to own end-to-end responsibility, starting from defining the problem statement (either yourself or alongside your peers), through development (PoC if needed) and testing, releasing to staging and then production environments, and finally monitoring.
  • Sound working knowledge of HTML, CSS and JS is an add-on
  • Technical know-how of MS Azure, AWS and GCP are desirable
  • Understand and keep the technical documentation up-to-date on Confluence
  • Collaborate work using bug tracking and project management tools like Jira, Redmine
Tatsam

at Tatsam

1 recruiter
Yash Pal Mittal
Posted by Yash Pal Mittal
Remote, NCR (Delhi | Gurgaon | Noida)
2 - 4 yrs
₹10L - ₹15L / yr
Java
Spring
Git
DevOps
SOLID
+4 more

Responsibilities:

  • Your primary responsibility as a senior backend engineer will be to architect and develop a scalable, robust microservices backend using Java, Spring (Boot), SQL, and AWS/GCP.
  • Experience being part of a software development team in an Agile/Lean/Continuous Delivery environment
  • Be a key performer in a high-performance product engineering team

Qualifications:

  • 2 to 4 years of overall IT experience. Most of this experience in Java (Core Java, Spring boot, Java collections, Java Multithreading)
  • Should have experience designing database schemas - SQL and NoSQL.
  • Exposure to frameworks like Spring, Hibernate, Play would be a plus
  • Experience with microservices architecture would be beneficial.
  • Working knowledge of any public cloud (AWS, GCP or Azure)
  • Broad understanding and experience of real-time analytics, NoSQL data stores, data modeling and data management, analytical tools, languages, or libraries
  • Knowledge of container tech like Docker, Kubernetes would be a plus.
  • Bachelor's Degree in Computer Science or Engineering.
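The qualifications above call for experience designing SQL and NoSQL database schemas. A rough sketch of the relational side, using Python's sqlite3 purely as a stand-in for a production database, with hypothetical tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    );
    CREATE TABLE sessions (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        started TEXT NOT NULL
    );
    -- Index the foreign key: looking up sessions by user is the hot path.
    CREATE INDEX idx_sessions_user ON sessions(user_id);
""")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
conn.execute("INSERT INTO sessions (user_id, started) VALUES (1, '2021-01-01')")
count = conn.execute(
    "SELECT COUNT(*) FROM sessions WHERE user_id = 1"
).fetchone()[0]
print(count)  # 1
```

The same reasoning (keys, constraints, and indexes chosen around access patterns) carries over to schema design in MySQL, Postgres, or a NoSQL store.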
Coredgeio

at Coredgeio

1 recruiter
Abhimanyu Bhatter
Posted by Abhimanyu Bhatter
Remote, Noida, Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
6 - 11 yrs
₹16L - ₹25L / yr
Reliability engineering
Docker
Kubernetes
DevOps
Site reliability
+6 more
What are we looking for:
● Research, propose and evaluate, with a 5-year vision, the architecture, design, technologies, processes and profiles related to Telco Cloud.
● Participate in the creation of a realistic technical-strategic roadmap to transform the network to Telco Cloud and prepare it for 5G.
● Using your deep technical expertise, provide detailed feedback to Product Management and Engineering, and contribute directly to the platform code base to enhance both the customer experience of the service and the SRE quality of life.
● Stay aware of trends in network infrastructure and in the network engineering and OSS community: what technologies are being developed or launched?
● Stay current with infrastructure trends in the telco network cloud domain.
● Be responsible for the engineering of Lab and Production Telco Cloud environments, including patches, upgrades, and reliability and performance improvements.
Required Minimum Qualifications (Education and Technical Skills/Knowledge):
● Software Engineering degree, MS in Computer Science or equivalent experience
● Years of experience in SRE, DevOps, development and/or support-related roles: 0-5 years of professional experience for a junior position, at least 8 years for a senior position
● Unix server administration and tuning: Linux / RedHat / CentOS / Ubuntu
● Deep knowledge of networking layers 1-4
● Cloud / virtualization (at least two): Helm, Docker, Kubernetes, AWS, Azure, Google Cloud, OpenStack, OpenShift, VMware vSphere / Tanzu
● In-depth knowledge of cloud storage solutions on top of AWS, GCP, Azure and/or on-prem private cloud, such as Ceph, CephFS, GlusterFS
● DevOps: Jenkins, Git, Azure DevOps, Ansible, Terraform
● Backend knowledge: Bash, Python, Go (knowledge of other scripting languages is a plus)
● PaaS-level solutions such as Keycloak for IAM, Prometheus, Grafana, ELK, DBaaS (such as MySQL, Cassandra)
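Since the stack above pairs Prometheus with Bash/Python/Go scripting, a small hedged sketch of the kind of glue code an SRE in this role might write: parsing a Prometheus exposition-format sample line (the sample metric is made up):

```python
import re

def parse_metric(line):
    """Parse one Prometheus exposition-format sample line into
    (name, labels_dict, value). Raises ValueError on malformed input."""
    m = re.match(r'^(\w+)(?:\{(.*)\})?\s+([0-9.eE+-]+)$', line.strip())
    if not m:
        raise ValueError(f"unparseable sample: {line!r}")
    name, raw_labels, value = m.groups()
    labels = {}
    if raw_labels:
        # Labels look like key="value", comma-separated inside the braces.
        for key, val in re.findall(r'(\w+)="([^"]*)"', raw_labels):
            labels[key] = val
    return name, labels, float(value)

name, labels, value = parse_metric('http_requests_total{method="get",code="200"} 1027')
print(name, labels, value)
```

In practice one would use an official client library rather than a hand-rolled regex; this only illustrates the data model behind the tooling.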
About the Organisation:
The team at Coredge.io is a combination of experienced and young professionals alike, with many years of experience in Edge computing, telecom application development and Kubernetes. The company has continuously collaborated with the open-source community, universities and major industry players in furthering its goal of providing the industry with an indispensable tool to offer improved services to its customers. Coredge.io has a global market presence with offices in the US and New Delhi, India.
Remote, Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
5 - 15 yrs
₹18L - ₹33L / yr
DevOps
Kubernetes
Docker
Ansible
Google Cloud Platform (GCP)
+1 more
Tech Lead: Infrastructure DevOps with Azure / GCP experience (Cloud Engineering)

We are hiring a Lead DevOps Engineer in the Cloud domain with hands-on experience in Azure / GCP.

- Expertise in managing Cloud / VMware resources and good exposure to Docker/Kubernetes

- Working knowledge of operating systems (Unix, Linux, IBM AIX)

- Experience in installing, configuring and managing the Apache web server and Tomcat/JBoss

- Good understanding of JVM, troubleshooting and performance tuning through thread dump and log analysis

- Strong expertise in DevOps tools:

- Deployment (Chef/Puppet/Ansible /Nebula/Nolio)

- SCM (TFS, GIT, ClearCase)

- Build tools (Ant, Maven, Make, Gradle)

- Artifact repositories (Nexus, JFrog Artifactory)

- CI tools (Jenkins, TeamCity)

- Experienced in scripting languages: Python, Ant, Bash and Shell
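One recurring task implied by the tool list above (Chef/Puppet/Ansible plus scripting) is rendering per-environment configuration from a template. A minimal stdlib sketch of that idea; the hostnames and settings are illustrative, not from any real deployment:

```python
from string import Template

# A tiny config template, analogous to what Ansible/Chef templates do.
template = Template(
    "server_name ${host}\n"
    "worker_processes ${workers}\n"
)

environments = {
    "staging":    {"host": "stage.example.com", "workers": "2"},
    "production": {"host": "www.example.com",   "workers": "8"},
}

# Render one config per environment, as a deploy script would before
# pushing the files out to the target servers.
configs = {env: template.substitute(vals) for env, vals in environments.items()}
print(configs["production"])
```

Real configuration management adds idempotency, secrets handling, and validation on top, but the template-plus-inventory split shown here is the core pattern.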

What will be required of you?

- Responsible for implementation and support of application/web server infrastructure for complex business applications

- Server configuration management, release management, deployments, automation & troubleshooting

- Set-up and configure Development, Staging, UAT and Production server environment for projects and install/configure all dependencies using the industry best practices

- Manage Code Repositories

- Manage, Document, Control and Innovate Development and Release procedure.

- Configure automated deployments across multiple environments

- Hands-on working experience of Azure or GCP.

- Transfer knowledge of the implementation to the support team, and support any production issues until that handover is complete