
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

Apptware Solutions LLP
Pune
6 - 10 yrs
₹9L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Company - Apptware Solutions

Location - Baner, Pune

Team Size - 130+


Job Description -

Cloud Engineer with 8+ years of experience


Roles and Responsibilities


● 8+ years of strong experience in the deployment, management and maintenance of large systems, on-premises or in the cloud

● Experience maintaining and deploying highly-available, fault-tolerant systems at scale

● A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc)

● Practical experience with Docker containerization and clustering (Kubernetes/ECS)

● Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, VPN)

● Version control system experience (e.g. Git)

● Experience implementing CI/CD (e.g. Jenkins, TravisCI, CodePipeline)

● Operational (e.g. HA/backups) experience with NoSQL databases (e.g. MongoDB, Redis) and SQL databases (e.g. MySQL)

● Experience with configuration management tools (e.g. Ansible, Chef)

● Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)

● Bachelor's or master’s degree in CS, or equivalent practical experience

● Effective communication skills

● Hands-on experience with cloud providers like MS Azure and Google Cloud

● A sense of ownership and ability to operate independently

● Experience with Jira and one or more Agile SDLC methodologies

● Nice to Have:

○ Sensu and Graphite

○ Ruby or Java

○ Python or Groovy

○ Java Performance Analysis
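As an illustration of the "automating repetitive tasks" bullet above, here is a minimal Python sketch of a typical cleanup script. The path and retention policy are hypothetical, and a real job would add logging and safeguards before deleting anything:

```python
import time
from pathlib import Path

def find_stale_files(root, max_age_days, now=None):
    """Return files under `root` not modified within the last `max_age_days` days."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # days -> seconds
    return sorted(p for p in Path(root).rglob("*")
                  if p.is_file() and p.stat().st_mtime < cutoff)

if __name__ == "__main__":
    demo_root = "/var/log/myapp"  # hypothetical path
    if Path(demo_root).is_dir():
        for f in find_stale_files(demo_root, max_age_days=30):
            print("stale:", f)  # a real job might archive or delete here
```

The same skeleton ports almost line-for-line to Bash or Ruby, which is why interviewers often ask for it in whichever language the candidate prefers.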


Role: Cloud Engineer

Industry Type: IT-Software, Software Services

Functional Area: IT Software - Application Programming, Maintenance

Employment Type: Full Time, Permanent

Role Category: Programming & Design

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As a Senior Associate in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala or Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP or Azure cloud platforms.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation
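The batch and real-time ingestion responsibilities above can be sketched minimally. This is an illustrative stdlib-only sketch, not the actual Publicis Sapient stack; the field names in `normalize` are hypothetical stand-ins for the "heterogeneous sources" mentioned:

```python
import json
import queue

def normalize(record):
    """Map a heterogeneous source record onto one common schema (hypothetical fields)."""
    return {
        "id": str(record.get("id") or record.get("user_id")),
        "amount": float(record.get("amount", 0.0)),
    }

def batch_ingest(lines):
    """Batch mode: consume a finite set of JSON lines in one pass."""
    return [normalize(json.loads(line)) for line in lines]

def stream_ingest(q):
    """Real-time mode: yield normalized records as they arrive; None signals shutdown."""
    while (record := q.get()) is not None:
        yield normalize(record)
```

In production the same two entry points would typically be backed by Spark for batch and Kafka plus Spark Streaming or Flink for real-time, with the shared normalization step keeping both paths schema-consistent.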

Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1. Overall 5+ years of IT experience with 3+ years in Data related technologies

2. Minimum 2.5 years of experience in Big Data technologies and working exposure in at least one cloud platform on related data services (AWS / Azure / GCP)

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines

4. Strong experience in at least one of the programming languages Java, Scala or Python; Java preferable

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed and working knowledge with data platform related services on at least 1 cloud platform, IAM and data security


Preferred Experience and Knowledge (Good to Have):

# Competency

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality

6. Cloud data specialty and other related Big Data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Fountane Inc
Remote only
1 - 2 yrs
₹6L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

JOB TITLE: DevOps Engineer

 

LOCATION: Remote/Hybrid.

 

A LITTLE BIT ABOUT THE ROLE:


As a DevOps Engineer with 1-2 years of experience, you will play a crucial role in ensuring the seamless development, deployment, and maintenance of software applications. Your responsibilities will include automating processes, managing source control, collaborating with project teams, and contributing to the continuous improvement of our development and deployment pipelines.

 

WHAT YOU WILL BE DOING:

  • Develop and maintain automation scripts to facilitate application testing and deployment.
  • Manage source control systems to track changes in automation scripts and code.
  • Collaborate with project teams to understand test objectives and provide input on testability of requirements.
  • Design and implement automation test strategies and testing frameworks.
  • Create and execute test cases, writing automation scripts to validate software functionality.
  • Work closely with project teams to gather automation requirements and troubleshoot issues.
  • Contribute to the setup and enhancement of continuous integration (CI) environments.
  • Participate in project meetings to define and agree on automation testing approaches.
  • Perform manual testing when required.
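As a sketch of the automated API-testing responsibility above, here is a minimal endpoint check. The URL, required fields, and the injectable `opener` are hypothetical; a real suite would build on a framework such as pytest and the team's chosen HTTP client:

```python
import json
import urllib.request

def check_endpoint(url, required_fields, opener=urllib.request.urlopen):
    """Fetch `url`, assert HTTP 200, and assert the JSON body has all required fields."""
    with opener(url) as resp:
        assert resp.status == 200, f"{url}: unexpected status {resp.status}"
        body = json.loads(resp.read())
    missing = [f for f in required_fields if f not in body]
    assert not missing, f"{url}: missing fields {missing}"
    return body
```

Making the opener injectable is what lets the same check run against a mock in CI and against a live staging environment in a deployment pipeline.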


WHAT YOU WILL NEED TO BE GREAT IN THIS ROLE:

  • 1-2 years of experience as a DevOps Engineer.
  • Familiarity with CI/CD processes and tools such as Jenkins, Git, or equivalent.
  • Experience in Automated API Testing, Performance Testing, and Security Testing.
  • Strong problem-solving skills and attention to detail.
  • ISTQB Certificate or equivalent certification.


SOFT SKILLS:

  • Collaboration - Ability to work in teams across the world
  • Adaptability - Situations can be unexpected and you need to adapt quickly
  • Open-mindedness - Expect to see things outside the ordinary


LIFE AT FOUNTANE:

  • Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
  • Competitive pay
  • Health insurance for spouses, kids, and parents.
  • PF/ESI or equivalent
  • Individual/team bonuses
  • Employee stock ownership plan
  • Fun/challenging variety of projects/industries
  • Flexible workplace policy - remote/physical
  • Flat organization - no micromanagement
  • Individual contribution - set your deadlines
  • Above all - culture that helps you grow exponentially!


Qualifications - No bachelor's degree required. Good communication skills are a must!


A LITTLE BIT ABOUT THE COMPANY:

Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.


We’re a team of 80 strong from around the world who are radically open-minded and believe in excellence, respecting one another, and pushing our boundaries further than ever before.



Techify Solutions Pvt Ltd
Posted by Zankhan Kukadiya
Pune, Ahmedabad
5 - 8 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Job Summary: 

  • We are actively seeking a seasoned DevOps Engineer with a minimum of 5 years of overall experience, which includes 3 years of hands-on working experience with AWS or Azure Cloud, Kubernetes, Terraform, Docker, Jenkins and either Python or Bash scripting. Ideal candidates will demonstrate a track record of proficiently executing projects utilizing these tools, with a keen ability to articulate their specific contributions and the methodologies applied.
  • This role mandates on-site presence at our Ahmedabad/Pune office. We offer competitive compensation commensurate with expertise and value contribution, with no salary constraints.
  • Should be able to work in 24x7 shifts to support infrastructure, with weekly rotational shifts.


Technical Skill

  • Experience working on AWS or Azure
  • Experience working with Docker, Kubernetes orchestration and EKS
  • Experience with infra automation using Terraform, CloudFormation and Ansible
  • Experience working in a Linux environment with at least one scripting language
  • Experience with CI/CD pipelines using Jenkins, Harness and ArgoCD
  • Experience with application performance monitoring tools such as Instana, Grafana, Splunk, PagerDuty, Pingdom and CloudWatch
  • Should have experience with source control and management tools like Git
Verto
Posted by Akin Okunola
Pune
2 - 6 yrs
₹6L - ₹20L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB

What we need 

We are looking for a strong Full Stack Developer to join our team. This person will be involved in active development assignments. You are expected to have between 2 and 6 years of professional experience in any object-oriented language, to have executed considerable work in NodeJS along with any of the modern web application building libraries such as Angular, Polymer and/or React, and to have at least a working knowledge of developing scalable distributed cloud applications on Azure, AWS or on-premises using Kubernetes or Apache Mesos.

 

Responsibilities 

- Design RESTful APIs

- Design and create services and system architecture for your projects, and contribute and provide feedback to other team members

- Experience with cloud services (ideally AWS). You know the “gotchas,” potential problems, and how to set up a geographically redundant service in the cloud

- Experience with different databases, including strong working knowledge of MySQL and relational databases

- This person will be responsible for working with other team members to develop and test highly scalable web applications and services as part of a suite of products in the Data governance domain working with petabyte scale data

- Prototype and develop new ideas and participate in all parts of the lifecycle from research to release

- Be comfortable working within a small team owning deliverables for our web APIs and front end.

- Comfortable with current development tools such as Jenkins, Git, bower, npm etc.

- Design and develop dockerized applications which will be deployed flexibly either on the cloud or on-premises depending on business requirements 

 

Who we think will be a great fit...  

We’re looking for someone who is not only a good full stack developer but also aware of modern trends in distributed software application development. You’re smart enough to work at top companies, but you’re picky about finding the right role (this is more than just a job, right?). You’re experienced, but you also like to learn new things. And you want to work with smart people and have fun building something great. 

 

You also meet most (if not more) of the following requirements: 

- 2+ years of professional development experience using any object oriented language

- Have developed and delivered at least one application using NodeJS

- Experience with modern web application building libraries such as Angular, Polymer, React etc.

- You find satisfaction in a job well done and want to solve head-scratching challenges

- Solid OOP and software design knowledge – you should know how to create software that’s extensible, reusable and meets desired architectural objectives

- Excellent understanding of HTTP and REST standards

- Experience with relational databases such as MySQL

- Good experience writing unit and acceptance tests

- Proven experience in developing highly scalable distributed cloud applications on a cloud system, preferably AWS

- You’re a great communicator and are capable of not just doing the work, but teaching others and explaining the “why” behind complicated technical decisions.

- You aren’t afraid to roll up your sleeves: This role will evolve over time, and we’ll want you to evolve with it!

mazosol
Posted by Kirthick Murali
Mumbai
10 - 20 yrs
₹30L - ₹58L / yr
Python
R Programming
PySpark
Google Cloud Platform (GCP)
SQL Azure

Data Scientist – Program Embedded 

Job Description:   

We are seeking a highly skilled and motivated senior data scientist to support a big data program. The successful candidate will play a pivotal role in supporting multiple projects in this program, covering traditional tasks from revenue management, demand forecasting and improving customer experience to testing/using new tools/platforms such as Copilot and Fabric for different purposes. The expected candidate would have deep expertise in machine learning methodology and applications, and should have completed multiple large-scale data science projects (full cycle, from ideation to BAU). Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. This is a data science role directly embedded into the program/projects; stakeholder management and collaboration with partners are crucial to success in this role (on top of the deep expertise).

What we are looking for: 

  1. Highly efficient in Python/PySpark/R.
  2. Understand MLOps concepts, working experience in product industrialization (from Data Science point of view). Experience in building product for live deployment, and continuous development and continuous integration. 
  3. Familiar with cloud platforms such as Azure, GCP, and the data management systems on such platform. Familiar with Databricks and product deployment on Databricks. 
  4. Experience in ML projects involving techniques: Regression, Time Series, Clustering, Classification, Dimension Reduction, Anomaly detection with traditional ML approaches and DL approaches. 
  5. Solid background in statistics, probability distributions, A/B testing validation, univariate/multivariate analysis, hypothesis test for different purpose, data augmentation etc. 
  6. Familiar with designing testing framework for different modelling practice/projects based on business needs. 
  7. Exposure to Gen AI tools and enthusiastic about experimenting and have new ideas on what can be done. 
  8. Experience improving an internal company process using an AI tool would be great (e.g. process simplification, manual task automation, auto emails).
  9. Ideally, 10+ years of experience, and have been on independent business facing roles. 
  10. CPG or retail as a data scientist would be nice, but not number one priority, especially for those who have navigated through multiple industries. 
  11. Being proactive and collaborative would be essential. 
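The "A/B testing validation" and "hypothesis test" competencies in point 5 commonly come down to a two-proportion z-test on conversion counts. A minimal stdlib sketch, illustrative only:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_* are conversion counts, n_* are sample sizes; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0: p_a == p_b
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value
```

In practice the same calculation would be run per-metric with multiple-testing corrections, but this is the core validation step the requirement points at.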

 

Some projects examples within the program: 

  1. Test new tools/platforms such as Copilot and Fabric for commercial reporting: testing, validation and building trust. 
  2. Building algorithms for predicting trend in category, consumptions to support dashboards. 
  3. Revenue Growth Management: create/understand the algorithms behind the tools we need to maintain or choose to improve (they can be built by 3rd parties). Able to prioritize and build a product roadmap, design new solutions, and articulate/quantify the limitations of the solutions. 
  4. Demand forecasting, create localized forecasts to improve in store availability. Proper model monitoring for early detection of potential issues in the forecast focusing particularly on improving the end user experience. 
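As a toy illustration of the demand-forecasting project in point 4, here is a moving-average baseline. This is a deliberately naive stand-in for a production forecasting model, useful mainly as the benchmark a real model must beat:

```python
def moving_average_forecast(history, window=3, horizon=2):
    """Forecast each future step as the mean of the last `window` observations,
    rolling each forecast back into the series for multi-step horizons."""
    series = list(history)
    forecasts = []
    for _ in range(horizon):
        window_vals = series[-window:]
        forecast = sum(window_vals) / len(window_vals)
        forecasts.append(forecast)
        series.append(forecast)  # roll the forecast forward
    return forecasts
```

The "model monitoring for early detection" requirement then amounts to tracking the error of the production model against a baseline like this and alerting when the gap shrinks or reverses.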


Mumbai
5 - 10 yrs
₹8L - ₹20L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Recommendation algorithms


Data Scientist – Delivery & New Frontiers Manager 

Job Description:   

We are seeking a highly skilled and motivated data scientist to join our Data Science team. The successful candidate will play a pivotal role in our data-driven initiatives and be responsible for designing, developing, and deploying data science solutions that drive business value for stakeholders. This role involves mapping business problems to formal data science solutions, working with a wide range of structured and unstructured data, architecture design, creating sophisticated models, setting up operations for the data science product with support from the MLOps team, and facilitating business workshops. In a nutshell, this person will represent data science and provide expertise across the full project cycle. Expectations of the successful candidate will be above those of a typical data scientist. Beyond technical expertise, problem solving in complex set-ups will be key to success in this role. 

Responsibilities: 

  • Collaborate with cross-functional teams, including software engineers, product managers, and business stakeholders, to understand business needs and identify data science opportunities. 
  • Map complex business problems to data science problems; design data science solutions using the GCP/Azure Databricks platform. 
  • Collect, clean, and preprocess large datasets from various internal and external sources.  
  • Streamlining data science process working with Data Engineering, and Technology teams. 
  • Managing multiple analytics projects within a Function to deliver end-to-end data science solutions, creation of insights and identify patterns.  
  • Develop and maintain data pipelines and infrastructure to support the data science projects 
  • Communicate findings and recommendations to stakeholders through data visualizations and presentations. 
  • Stay up to date with the latest data science trends and technologies, specifically on GCP 

 

Education / Certifications:  

Bachelor’s or Master’s in Computer Science, Engineering, Computational Statistics, Mathematics. 

Job specific requirements:  

  • Brings 5+ years of deep data science experience 

  • Strong knowledge of machine learning and statistical modeling techniques in a cloud-based environment such as GCP, Azure or Amazon 

  • Experience with programming languages such as Python, R, Spark 
  • Experience with data visualization tools such as Tableau, Power BI, and D3.js 
  • Strong understanding of data structures, algorithms, and software design principles 
  • Experience with GCP platforms and services such as BigQuery, Cloud ML Engine and Cloud Storage 
  • Experience in configuring and setting up the version control on Code, Data, and Machine Learning Models using GitHub. 
  • Self-driven, be able to work with cross-functional teams in a fast-paced environment, adaptability to the changing business needs. 
  • Strong analytical and problem-solving skills 
  • Excellent verbal and written communication skills 
  • Working knowledge of application architecture, data security and compliance. 


admedia
Posted by Ashita John
Remote only
6 - 16 yrs
₹15L - ₹25L / yr
Java
Hibernate (Java)
Spring
Apache
MySQL

Responsibilities:

• Develop and maintain a high-quality, scalable, and efficient Java codebase for our ad-serving platform.

• Collaborate with cross-functional teams including product managers, designers, and other developers to understand requirements and translate them into technical solutions.

• Design and implement new features and functionalities in the ad-serving system, focusing on performance optimization and reliability.

• Troubleshoot and debug complex issues in the ad server environment, providing timely resolutions to ensure uninterrupted service.

• Conduct code reviews, provide constructive feedback, and enforce coding best practices to maintain code quality and consistency across the platform.

• Stay updated with emerging technologies and industry trends in ad serving and digital advertising, and integrate relevant innovations into our platform.

• Work closely with DevOps and infrastructure teams to deploy and maintain the ad-serving platform in a cloud-based environment.

• Collaborate with stakeholders to gather requirements, define technical specifications, and estimate development efforts for new projects and features.

• Mentor junior developers, sharing knowledge and best practices to foster a culture of continuous learning and improvement within the development team.

• Participate in on-call rotations and provide support for production issues as needed, ensuring maximum uptime and reliability of the ad-serving platform.

DeepIntent
Posted by Indrajeet Deshmukh
Pune
3 - 5 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark

About DeepIntent:

DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioural, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences and message them on a one-to-one basis in a privacy-compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.


What You’ll Do:

  • Establish a formal data practice for the organisation.
  • Build & operate scalable and robust data architectures.
  • Create pipelines for the self-service introduction and usage of new data.
  • Implement DataOps practices.
  • Design, develop, and operate data pipelines which support data scientists and machine learning engineers.
  • Build simple, highly reliable data storage, ingestion, and transformation solutions which are easy to deploy and manage.
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.

Who You Are:

  • Experience in designing, developing and operating configurable data pipelines serving high-volume and high-velocity data.
  • Experience working with public clouds like GCP/AWS.
  • Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.
  • Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
  • Proficient with SQL, Java, Spring Boot, Python or another JVM-based language, and Bash.
  • Experience with any of the Apache open source projects such as Spark, Druid, Beam, Airflow, etc. and big data databases like BigQuery, ClickHouse, etc.
  • Good communication skills with the ability to collaborate with both technical and non-technical people.
  • Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious.
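Orchestrators like Airflow, mentioned in the list above, ultimately run tasks in dependency order. A minimal stdlib sketch of that idea, with hypothetical task names standing in for real pipeline stages:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run callables in dependency order and return the execution order.

    `tasks` maps name -> callable; `deps` maps name -> set of upstream task names."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()  # a real orchestrator adds retries, logging, scheduling
    return order
```

Real DAG schedulers layer retries, backfills, and parallel execution of independent branches on top, but the topological ordering is the core contract.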

 

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and you will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala or Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP or Azure cloud platforms is preferred.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1. Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies

2. Minimum 1.5 years of experience in Big Data technologies

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala or Python; Java preferable

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

# Competency

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality

6. Working knowledge with data platform related services on at least 1 cloud platform, IAM and data security

7. Cloud data specialty and other related Big Data technology certifications



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Astra Security
Posted by Human Resources
Remote only
2 - 4 yrs
₹10L - ₹19L / yr
Go Programming (Golang)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
RESTful APIs
SaaS

About us

Astra is a cyber security SaaS company that makes otherwise chaotic pentests a breeze with its one-of-a-kind Pentest Platform. Astra's continuous vulnerability scanner emulates hacker behavior to scan applications with 8300+ security tests. CTOs & CISOs love Astra because it helps them fix vulnerabilities in record time and move from DevOps to DevSecOps with Astra's CI/CD integrations.


Astra is loved by 650+ companies across the globe. In 2023 Astra uncovered 2 million+ vulnerabilities for its customers, saving customers $69M+ in potential losses due to security vulnerabilities. 


We've been awarded by the President of France Mr. François Hollande at the La French Tech program and Prime Minister of India Shri Narendra Modi at the Global Conference on Cyber Security. Loom, MamaEarth, Muthoot Finance, Canara Robeco, ScripBox etc. are a few of Astra’s customers.


Role Overview

As an SDE 2 Back-end Engineer at Astra, you will play a crucial role in the development of a new vulnerability scanner from scratch. You will be architecting & engineering a scalable technical solution from the ground-up.

You will have the opportunity to work alongside talented individuals, collaborating to deliver innovative solutions and pushing the boundaries of what's possible in vulnerability scanning. The role requires deep collaboration with the founders, product, engineering & security teams.

Join our team and contribute to the development of a cutting-edge SaaS security platform, where high-quality engineering and continuous learning are at the core of everything we do.


Roles & Responsibilities:


  • You will be joining our Vulnerability Scanner team which builds a security engine to identify vulnerabilities in technical infrastructure.
  • You will be the technical product owner of the scanner, which would involve managing a lean team of backend engineers to ensure smooth implementation of the technical product roadmap.
  • Research about security vulnerabilities, CVEs, and zero-days affecting cloud/web/API infrastructure.
  • Work in an agile environment of engineers to architect, design, develop and build our microservice infrastructure.
  • You will research, design, code, troubleshoot and support (on-call). What you create is also what you own.
  • Writing secure, high-quality, modular, testable & well-documented code for the features outlined in every sprint.
  • Design and implement APIs in support of other services with a highly scalable, flexible, and secure backend using GoLang
  • Hands-on experience with creating production-ready code & optimizing it by identifying and correcting bottlenecks.
  • Driving strict code review standards among the team.
  • Ensuring timely delivery of the features/products
  • Working with product managers to ensure product delivery status is transparent & the end product always looks like how it was imagined
  • Work closely with Security & Product teams in writing vulnerability detection rules, APIs etc.


Required Qualifications & Skills: 


  • 2-4 years of strong, relevant development experience in GoLang
  • Experience in building a technical product from idea to production.
  • Design and build highly scalable and maintainable systems in Golang
  • Expertise in Goroutines and Channels to write efficient code utilizing multi-core CPU optimally
  • Must have hands-on experience with managing AWS/Google Cloud infrastructure
  • Hands on experience in creating low latency high throughput REST APIs
  • Write test suites and maintain code coverage above 80%
  • Working knowledge of PostgreSQL, Redis, Kafka
  • Good to have experience in Docker, Kubernetes, Kafka
  • Good understanding of Data Structures, Algorithms and Operating Systems.
  • Understanding of cloud/web security concepts would be an added advantage
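The Goroutines-and-Channels requirement above amounts to a fan-out/fan-in worker pool. A minimal sketch of that pattern, written here in Python for illustration (the `scan_target` stub and its fields are hypothetical, not Astra's scanner):

```python
from concurrent.futures import ThreadPoolExecutor

def scan_target(target: str) -> dict:
    # Placeholder check; a real scanner would probe the target here.
    finding = 0 if target.startswith("https://") else 1
    return {"target": target, "findings": finding}

def scan_all(targets, workers=4):
    # Fan targets out across a worker pool and collect results in order;
    # the same fan-out/fan-in shape a Goroutine/Channel pipeline gives you.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(scan_target, targets))

results = scan_all(["https://example.com", "http://legacy.example.com"])
print(results)
```

In Go, the pool would be a fixed set of goroutines reading targets from one channel and writing results to another; the structure is the same.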


What We Offer:


  • Adrenalin rush of being a part of a fast-growing company
  • Fully remote & agile working environment
  • A wholesome opportunity in a fast-paced environment where you get to build things from scratch, improve and influence product design decisions
  • Holistic understanding of SaaS and enterprise security business
  • Opportunity to engage and collaborate with developers globally
  • Experience with security side of things
  • Annual trips to beaches or mountains (last one was Chikmangaluru)
  • Open and supportive culture 
Read more
FindingPi Inc

at FindingPi Inc

1 recruiter
Mrinmayee Bandopadhyay
Posted by Mrinmayee Bandopadhyay
ShivajiNager
4 - 6 yrs
₹6L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

About the role:

 

We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have a minimum of 4 years of experience, with an extensive background in building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Additionally, expertise in backend technologies and development frameworks is required to collaborate effectively with the development team in delivering scalable and efficient solutions.

 

Responsibilities

 

  • Collaborate with development and operations teams to implement continuous integration and deployment processes.
  • Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
  • Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD)
  • Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
  • Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
  • Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
  • Familiarity with Azure CLI, Azure REST APIs, Azure Resource Manager templates, Azure billing/cost management, and the Azure Management Console
  • Must have experience with at least one programming language (Java, .NET, Python)
  • Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
  • Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
  • Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
  • Contribute to backend development projects, ensuring robust and scalable solutions.
  • Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
  • Design and implement database schemas.
  • Identify and implement opportunities for performance optimization and scalability of backend systems.
  • Participate in code reviews, architectural discussions, and sprint planning sessions.
  • Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
  • Mentor junior team members and provide guidance and training on best practices in DevOps.

 

 

Required Qualifications

  • BS/MS in Computer Science, Engineering, or a related field
  • 4+ years of experience as an Azure DevOps Engineer (or similar role) with experience in backend development.
  • Strong understanding of CI/CD principles and practices.
  • Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
  • Experience with infrastructure automation tools like Terraform or Ansible.
  • Proficient in scripting languages like PowerShell or Python.
  • Experience with Linux and Windows server administration.
  • Strong understanding of backend development principles and technologies.
  • Excellent communication and collaboration skills.
  • Ability to work independently and as part of a team.
  • Problem-solving and analytical skills.
  • Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
  • Excellent problem-solving, critical thinking, and communication skills.
  • Experience working in a product-based company.

 

What we offer:

  • Competitive salary and benefits package
  • Opportunity for growth and advancement within the company
  • Collaborative, dynamic, and fun work environment
  • Possibility to work with cutting-edge technologies and innovative projects


Read more
Tekdi Technologies Pvt. Ltd.
Tekdi Recruitment
Posted by Tekdi Recruitment
Pune
5 - 11 yrs
₹7L - ₹15L / yr
skill iconAmazon Web Services (AWS)
CI/CD
SQL Azure
Google Cloud Platform (GCP)
DevOps

Key Responsibilities:

  1. Cloud Infrastructure Management: Oversee the deployment, scaling, and management of cloud infrastructure across platforms like AWS, GCP, and Azure. Ensure optimal configuration, security, and cost-effectiveness.
  2. Application Deployment and Maintenance: Responsible for deploying and maintaining web applications, particularly those built on Django and the MERN stack (MongoDB, Express.js, React, Node.js). This includes setting up CI/CD pipelines, monitoring performance, and troubleshooting.
  3. Automation and Optimization: Develop scripts and automation tools to streamline operations. Continuously seek ways to improve system efficiency and reduce downtime.
  4. Security Compliance: Ensure that all cloud deployments comply with relevant security standards and practices. Regularly conduct security audits and coordinate with security teams to address vulnerabilities.
  5. Collaboration and Support: Work closely with development teams to understand their needs and provide technical support. Act as a liaison between developers, IT staff, and management to ensure smooth operation and implementation of cloud solutions.
  6. Disaster Recovery and Backup: Implement and manage disaster recovery plans and backup strategies to ensure data integrity and availability.
  7. Performance Monitoring: Regularly monitor and report on the performance of cloud services and applications. Use data to make informed decisions about upgrades, scaling, and other changes.
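The performance-monitoring item above ("use data to make informed decisions about upgrades, scaling") can be sketched as a small automation script; the thresholds and function name here are illustrative, not from the posting:

```python
def scaling_recommendation(cpu_samples, high=70.0, low=20.0):
    """Recommend a scaling action from recent CPU utilisation samples (percent).

    The high/low thresholds are illustrative defaults; in practice they
    would come from monitoring policy, not hard-coded values.
    """
    if not cpu_samples:
        return "no-data"
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale-up"
    if avg < low:
        return "scale-down"
    return "hold"

print(scaling_recommendation([82, 75, 90]))  # sustained high load
print(scaling_recommendation([12, 8, 15]))   # sustained low load
```

A real setup would pull the samples from CloudWatch, Azure Monitor or Stackdriver and feed the recommendation into an autoscaling policy.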


Required Skills and Experience:

  • Proven experience in managing cloud infrastructure on AWS, GCP, and Azure.
  • Strong background in deploying and maintaining Django-based and MERN stack web applications.
  • Expertise in automation tools and scripting languages.
  • Solid understanding of network architecture and security protocols.
  • Experience with continuous integration and deployment (CI/CD) methodologies.
  • Excellent problem-solving abilities and a proactive approach to system optimization.
  • Good communication skills for effective collaboration with various teams.


Desired Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Relevant certifications in AWS, GCP, or Azure are highly desirable.
  • Several years of experience in a DevOps or similar role, with a focus on cloud computing and web application deployment.


Read more
HighLevel Inc.
Remote only
3 - 7 yrs
Best in industry
skill iconNodeJS (Node.js)
skill iconJavascript
skill iconVue.js
skill iconMongoDB
Google Cloud Platform (GCP)
+3 more

About HighLevel

HighLevel is an all-in-one, white-label marketing platform for agencies & consultants. Our goal as a business is to create a sustainable, powerful, “all things marketing” operating system that creates limitless opportunities for our customers. With over 20,000 customers, we need people like YOU to help us grow and scale even further in the coming years. We currently have 600 employees worldwide, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and, above all, encourage a healthy work-life balance for our employees wherever they call home.


Our Website - https://www.gohighlevel.com/

YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipEDQcsm1j4g


Role & Responsibilities

We are seeking a highly skilled Full Stack Developer to join our CRM Conversation / support AI chatbot team. The ideal candidate will have a strong background in Node.js and Vue.js and possess hands-on experience in various technologies and concepts. You will be responsible for implementing the visual elements that users see and interact with in a web application.

● Collaborate with cross-functional teams to design, develop, and maintain CRM applications and features.

● Build and optimize user interfaces using Vue.js for an exceptional user experience.

● Develop server-side logic and APIs using Node.js.

● Implement robust data storage and retrieval solutions with a focus on Elastic Search, Data Indexing, Database Sharding, and Autoscaling.

● Integrate Message Queues, Pub-sub systems, and Event-Based architectures to enable real-time data processing and event-driven workflows.

● Handle real-time data migration and event processing tasks efficiently.

● Utilize messaging systems such as Active MQ, Rabbit MQ, and Kafka to manage data flow and communication within the CRM ecosystem.

● Collaborate closely with front-end and back-end developers, product managers, and data engineers to deliver high-quality solutions.

● Optimize applications for maximum speed and scalability.

● Ensure the security and integrity of data and application systems.

● Troubleshoot and resolve technical issues, bugs, and performance bottlenecks.

● Stay updated with emerging technologies and industry trends, and make recommendations for adoption when appropriate.

● Participate in code reviews, maintain documentation, and contribute to a culture of continuous improvement.

● Provide technical support and mentorship to junior developers when necessary.
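Database sharding, listed in the responsibilities above, usually starts with a deterministic key-to-shard mapping. A minimal sketch (the shard count and key format are illustrative, not HighLevel's actual scheme):

```python
import hashlib

NUM_SHARDS = 8  # illustrative shard count

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    # Use a stable digest rather than Python's built-in hash(), which is
    # randomized per process, so the same key always lands on the same shard
    # no matter which service computes the mapping.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

print(shard_for("contact:42"))  # stable shard id in the range [0, 8)
```

Production systems typically layer consistent hashing or a shard directory on top of this, so shards can be added without remapping every key.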


Qualifications

● Hands-on experience with Node.js and Vue.js (or React/Angular)

● Strong understanding of Elastic Search, Data Indexing, Database Sharding, and Autoscaling techniques.

● Experience working with Message Queues, Pub-sub patterns, and Event-Based architecture.

● Proficiency in Real-time Data Migration and Real-time Event Processing.

● Familiarity with messaging systems like Active MQ, Rabbit MQ, and Kafka.

● Bachelor's degree or equivalent experience in Engineering or a related field of study

● Expertise with MongoDB

● Proficient understanding of code versioning tools, such as Git

● Strong communication and problem-solving skills


What to Expect when you Apply

● Exploratory Call

● Technical Round I/II

● Assignment

● Cultural Fitment Round


Perks and Benefits

● Impact- Work with scale, our infrastructure handles around 5 Billion+ API hits, 2 Billion+ message events, and more than 40 TeraBytes of data

● Compensation- Best in Industry

● Learning- Work with a team of A-players distributed across 15 countries who move fast (we have built one of the widest products on the market in under 3 years)

● Generous Device Policy- You get a Macbook Pro

● Unlimited Leave Policy

● 1 team offsite every year

● Remote-first culture


EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organization stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.

Read more
ToXSL Technologies Pvt Ltd

at ToXSL Technologies Pvt Ltd

1 video
1 recruiter
Parul Kapoor
Posted by Parul Kapoor
Mohali, Chandigarh
2 - 4 yrs
₹4L - ₹8L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

Requirements :

  •   Good knowledge of Linux Ubuntu.
  •   Knowledge of general networking practices/protocols / administrative tasks.
  •   Adding, removing, or updating user account information, resetting passwords, etc.
  •   Scripting to ensure operations automation and data gathering is accomplished seamlessly.
  •   Ability to work cooperatively with software developers, testers, and database administrators.
  •   Experience with software version control system (GIT) and CI.
  •   Knowledge of web servers such as Apache and Nginx.
  •   Experience with e-mail servers based on Postfix and Dovecot.
  •   Understanding of Docker and Kubernetes containers.
  •   IT hardware, Linux server, and system administration experience.
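The scripting-for-automation-and-data-gathering requirement above can be sketched with a small host-facts script; the report fields are illustrative, not a prescribed format:

```python
import os
import platform
import shutil

def gather_facts(path="/"):
    """Collect a few host facts for an operations report (illustrative fields)."""
    total, used, free = shutil.disk_usage(path)
    return {
        "host": platform.node(),
        "os": platform.system(),
        "disk_used_pct": round(used / total * 100, 1),
        # getloadavg is unavailable on some platforms (e.g. Windows)
        "load_avg_1m": os.getloadavg()[0] if hasattr(os, "getloadavg") else None,
    }

print(gather_facts())
```

Run across a fleet (via cron, Ansible, or SSH), scripts like this feed the data-gathering side of day-to-day Linux administration.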

 

Highlights:

  • Working 5 days a week.
  • Group Health Insurance for our employees.
  • Work with a team of 300+ excellent engineers.
  • Extra Compensation for Night Shifts.
  • Additional Salary for an extra day spent in the office.
  • Lunch buffets for all our employees.
  • Fantastic Friday meals to dine in for employees.
  • Yearly and quarterly awards with CASH amount, Birthday celebration, Dinner coupons etc.
  • Team Dinners on Project Completion.
  • Festival celebration, Month End celebration.


Read more
Arting Digital
Pragati Bhardwaj
Posted by Pragati Bhardwaj
Gurugram
3 - 10 yrs
₹12L - ₹14L / yr
B2B Marketing
Sales
Cloud Computing
end to end sales
Google Cloud Platform (GCP)
+2 more

Job Title-  Cloud Sales Specialist


CTC- 12-14Lpa

Location-  Gurgaon

Experience-  6+Years

Working Mode- Work from Office

Critical Skills- Good communication skills, Cloud Sales, B2B Sales

Qualification-  Any Engineering/Computer degree



The Cloud Sales Specialist is responsible for achieving revenue targets and ensuring on-time collections for the assigned cloud products/services (for example, Azure, AWS, GCP) in the respective location(s). The role holder is responsible for the effective management of the sales funnel, execution of marketing activities, and coordination of channel partner enablement initiatives for the assigned product/services. Building and maintaining strong professional relationships with vendor and channel partner representatives is critical to the role.


Responsibilities:


  • Responsible for achieving revenue targets (quarterly, annual) through effective sales funnel management for the assigned products/services in the respective location(s)
  • Be responsible for on-time collections from channel partners and execution of marketing activities for the assigned products/services
  • Build and maintain relationships with vendor representatives and channel partners for the assigned products/services
  • Responsible for MIS, reports generation, documentation, and compliance for sales, collection, and channel enablement activities, as per guidelines



Requirements:


  • Must have experience in Cloud Sales and B2B Sales.
  • Field Sales would be an added advantage.
  • Experience of around 3 to 5 years in the sales function in IT Distribution, preferably in cloud solutions
  • Should possess an understanding of the sales, distribution, and channel management process for cloud solutions
  • Good Product knowledge of Cloud Solution offerings
  • Cloud sales certification with one or more vendors is mandatory (for example, Amazon, Oracle, GCP)
  • Should possess excellent interpersonal and communication skills
  • Should be able to build strong relations with key stakeholders


Read more
Chennai, Coimbatore
6 - 10 yrs
₹10L - ₹25L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
skill iconAmazon Web Services (AWS)
+2 more

Role & responsibilities

  • Senior Java developer with 6 to 10 years of experience having worked on Java, SpringBoot, Hibernate, Microservices, Redis, AWS S3
  • Contribute to all stages of the software development lifecycle
  • Design, implement, and maintain Java-based applications that can be high-volume and low-latency
  • Analyze user requirements to define business objectives
  • Envisioning system features and functionality
  • Define application objectives and functionality
  • Ensure application designs conform to business goals
  • Develop and test software
  • Should have good experience in Code Review
  • Expecting to be 100% hands-on while working with the clients directly
  • Performing requirement analysis
  • Developing high-quality and detailed designs
  • Conducting unit testing using automated unit test frameworks
  • Identifying risk and conducting mitigation action planning
  • Reviewing the work of other developers and providing feedback
  • Using coding standards and best practices to ensure quality
  • Communicating with customers to resolve issues
  • Good Communication Skills 


Read more
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
"DATA STREAMING"

Data Engineering : Senior Engineer / Manager


As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and will independently drive design discussions to ensure the health of the overall solution.


Must Have skills :


1. GCP


2. Spark streaming : Live data streaming experience is desired.


3. Any 1 coding language: Java/Python/Scala
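The live data streaming requirement above usually means window-based aggregation over an unbounded event stream. A pure-Python sketch of the tumbling-window counting that Spark Structured Streaming or Flink performs at scale (event shape and window size are illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp_secs, key) events into fixed windows, count per key.

    Mirrors what groupBy(window(...)) does in Spark Structured Streaming,
    minus the incremental state management a real engine provides.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```

In a real pipeline the engine also handles late data via watermarks and checkpoints state, which this sketch omits.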



Skills & Experience :


- Overall experience of minimum 5+ years, with at least 4 years of relevant experience in Big Data technologies


- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.


- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.


- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery etc.


- Well-versed and working knowledge with data platform related services on GCP


- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training and/or experience that demonstrates the ability to perform the duties of the position


Your Impact :


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation

Read more
UpSolve Solutions LLP
Shaurya Kuchhal
Posted by Shaurya Kuchhal
Mumbai
2 - 4 yrs
₹7L - ₹11L / yr
skill iconMachine Learning (ML)
skill iconData Science
Microsoft Windows Azure
Google Cloud Platform (GCP)
skill iconPython
+3 more

About UpSolve

Work on cutting-edge tech stack. Build innovative solutions. Computer Vision, NLP, Video Analytics and IOT.


Job Role

  • Ideate use cases to include recent tech releases.
  • Discuss business plans and assist teams in aligning with dynamic KPIs.
  • Design solution architecture from input to infrastructure and services used to data store.


Job Requirements

  • Working knowledge about Azure Cognitive Services.
  • Project Experience in building AI solutions like Chatbots, sentiment analysis, Image Classification, etc.
  • Quick Learner and Problem Solver.


Job Qualifications

  • Work Experience: 2 years +
  • Education: Computer Science/IT Engineer
  • Location: Mumbai
Read more
Cision

at Cision

1 recruiter
Rajesh N
Posted by Rajesh N
Remote only
5 - 10 yrs
₹10L - ₹15L / yr
UCS
skill iconAmazon Web Services (AWS)
Cisco Certified Network Associate (CCNA)
Google Cloud Platform (GCP)
Meraki

Responsibilities:

  • Design, configure, administer, monitor and troubleshoot complex enterprise network environment for communications and solution improvements within the Company's high-traffic, business-critical Internet production environment. 
  • Develop and implement plans for infrastructure upgrades, modeling, analysis, and planning.
  • Analyze, design, test, and evaluate network systems including local area networks (LAN), wide area networks (SD-WAN), Cloud, Data Centers, and other data communications systems. 
  • Design, manage, and maintain office wired and wireless networks.
  • Work with complex network and storage environments including SAN, clustering, load balancing, multi-site VPNs, network IDS, and firewalls.
  • Monitor network performance and provide security measures, troubleshooting and maintenance, as needed. 
  • Acquire, maintain and report on data related to network performance management, capacity planning and utilization trend analysis.
  • Provide after-hours support for systems and applications.
  • Collaborate with other functional IT areas to ensure that IT best practices, policies, and procedures are adhered to. Provide technical support for staff issues.
  • 7+ years of experience in setup, configuration and maintenance of networking infrastructure including Cisco switches, routers, firewalls, VPN technologies, SD-WAN, and other WAN administration.


Experience

  • 7+ years of experience with configuring and supporting IP networks.
  • 5+ years of experience with Google Cloud Platform (GCP) and multi-region Amazon Web Services (AWS) networking configurations
  • 3+ years of experience in managing Meraki wireless networks and equipment in office environments.
  • Strong fundamental knowledge of networks, ports, protocols, and infrastructure setup
  • Enterprise-Level: Larger than 1,000 PCs and 500 Servers (multi-platform) are a definite plus
  • Solid understanding of Microsoft Active Directory, DNS, DHCP, etc. in a large multi-site environment.
  • In-depth knowledge of IT standards, concepts, best practices, and procedures - relies on knowledge, experience and judgment to plan and accomplish goals.
  • Excellent verbal and written communication skills for purposes of communicating with team members as well as other departments.
  • Ability to take initiative & manage multiple detailed tasks in a fast-paced and ever-changing environment.
  • Organizational, and project management abilities, strong analytical and problem-solving skills


Education Qualifications:

  • Bachelor's Degree: CIS Computer Information Systems; Information Technology, Computer Science or similar.
  • CCNP Certification or similar
  • Cisco switches, routers, firewalls, VPN technologies, SD-WAN, WAN administration
  • Cisco SAN experience
  • OS Platforms: Windows / Linux
  • Hardware Platforms: Cisco UCS, Cisco Meraki, EMC storage



Read more
Unico Connect Private Limited
Mumbai
3 - 4 yrs
₹5L - ₹16L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+5 more

We are seeking a highly skilled and motivated DevOps Engineer to join our team. As a DevOps Engineer at Unico Connect, you will play a crucial role in enhancing our software development and deployment processes. The ideal candidate will have hands-on experience with AWS or GCP. 


Responsibilities

  1. Create and manage CI/CD pipelines using Jenkins, GitHub Actions, or GitLab EE pipelines.
  2. Implement continuous deployment using CD tools such as Git, Jenkins, and Argo CD.
  3. Utilize Docker for containerization and deployment.
  4. Manage and optimize ports for efficient communication within the software architecture.
  5. Implement and manage API gateways, Load Balancers to facilitate secure and efficient communication between services.
  6. Configure and maintain Web Application Firewalls (WAF) to protect applications from security threats.
  7. Implement robust security policies to safeguard our applications and infrastructure.
  8. Deploy code to servers, ensuring a seamless and reliable release process.
  9. Collaborate with development and operations teams to streamline processes.
  10. Assist in capacity planning and scaling of infrastructure as needed.
  11. Identify, tackle, and mitigate system failures and security vulnerabilities.
  12. Prepare, deploy, manage, and monitor Docker containers (React, NodeJS applications) and store their images in a registry.
  13. Implement DevOps, GitOps, SecOps practices.
  14. Experience with monitoring and logging frameworks such as DataDog, Grafana, Prometheus, etc.


Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Proven experience as a DevOps Engineer, with expertise in AWS or GCP.
  • Solid understanding of port management, security policies, code deployment, and CI/CD pipelines.
  • Hands-on experience with implementation of API gateway, Load balancer and Web Application Firewalls (WAF).
  • Experience with container technologies such as Docker and container orchestration platforms such as Kubernetes.
  • Strong understanding of Linux administration and shell scripting.
  • Proficiency in code versioning tools like Git. 
  • Experience with monitoring tools such as Datadog and ELK.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication skills.
  • Ability to work individually in a fast-paced environment.
  • Skills: DevOps, S3, Docker, route53, Kubernetes, AWS/GCP, PM2, Netlify, Domain handling, API Gateway, Load-balancer, Terraform, Jenkins, CI/CD, Networking, VPC, Security, ELK, Python, Shell/Bash, Disaster recovery, Monitoring, Grafana, Prometheus, EKS


Nice to Have

  • Previous exposure to architectures based on microservices.
  • Cloud Certification(s) or equivalents like Solutions Architect, DevOps Engineer, SysOps Admin.
  • Knowledge of Redis, Kafka, RabbitMQ.
  • Experience in implementing and designing cloud-native security concepts, DevSecOps.


Read more
VoerEir India

at VoerEir India

2 recruiters
Pooja Jaiswal
Posted by Pooja Jaiswal
Noida
3 - 5 yrs
₹13L - ₹15L / yr
skill iconPython
skill iconDjango
skill iconFlask
Linux/Unix
Computer Networking
+3 more

Roles and Responsibilities

• Ability to create solution prototype and conduct proof of concept of new tools.

• Work in research and understanding of new tools and areas.

• Clearly articulate pros and cons of various technologies/platforms and perform detailed analysis of business problems and technical environments to derive a solution.

• Optimisation of the application for maximum speed and scalability.

• Work on feature development and bug fixing.

Technical skills

• Must have knowledge of networking in Linux, and basics of computer networks in general.

• Must have intermediate/advanced knowledge of one programming language, preferably Python.

• Must have experience of writing shell scripts and configuration files.

• Should be proficient in bash.

• Should have excellent Linux administration capabilities.

• Working experience of SCM. Git is preferred.

• Knowledge of build and CI-CD tools, like Jenkins, Bamboo etc is a plus.

• Understanding of Architecture of OpenStack/Kubernetes is a plus.

• Code contributed to OpenStack/Kubernetes community will be a plus.

• Data Center network troubleshooting will be a plus.

• Understanding of NFV and SDN domain will be a plus.

Soft skills

• Excellent verbal and written communications skills.

• Highly driven, positive attitude, team player, self-learning, self-motivating and flexible

• Strong customer focus - Decent Networking and relationship management

• Flair for creativity and innovation

• Strategic thinking

This is an individual contributor role and will need client interaction on the technical side.


Must have Skill - Linux, Networking, Python, Cloud

Additional Skills-OpenStack, Kubernetes, Shell, Java, Development


Read more
Arahas Technologies
Nidhi Shivane
Posted by Nidhi Shivane
Pune
3 - 8 yrs
₹10L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+3 more


Role Description

This is a full-time hybrid role as a GCP Data Engineer. As a GCP Data Engineer, you will be responsible for managing large sets of structured and unstructured data and developing processes to convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location :-Pune

If you have a passion for data engineering and possess the following, we would love to hear from you:


🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)

🔹 At least 4+ years of experience in Google Cloud Platform, with a focus on BigQuery

🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API Scripting

🔹 Experience in the Finance/Revenue domain would be considered an added advantage

🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.

Read more
Bengaluru (Bangalore)
5 - 11 yrs
Best in industry
skill iconJavascript
skill iconAmazon Web Services (AWS)
Windows Azure
skill iconJava
skill iconReact.js
+3 more

About DataGrokr:

DataGrokr (www.datagrokr.com) is a cloud-native technology consulting organization providing the next generation of data management, cloud and enterprise solutions. We solve complex technology problems for our global clients who rely on us for our deep technical knowledge and delivery excellence.

If you are unafraid of technology, believe in your learning ability and are looking to work amongst smart, driven colleagues whom you can look up to and learn from, you might want to check us out.


About the Role:

(Job location – Bangalore)

Job Overview:

We are seeking a highly skilled and experienced Lead Full Stack Developer to join our team. The ideal candidate will be responsible for building complex applications and leading a full-stack development team. He/she will work closely with our development team and stakeholders to develop innovative solutions and drive technical excellence to build high-quality, scalable, and responsive web applications.


Roles and Responsibilities:

• Lead and manage a team of full-stack developers.

• Design and implement complex applications and architecture using modern software development practices.

• Collaborate with product managers, designers, and other stakeholders to understand project requirements and develop technical solutions that meet business needs.

• Provide technical leadership and mentorship to the development team, and conduct code reviews to ensure code quality and maintainability.

• Ensure that the codebase is scalable, maintainable, and of high quality.

• Optimize application performance and user experience.

• Stay up to date with the latest trends and best practices in both Frontend and Backend development and recommend new tools and technologies to improve the development process.

• Define and enforce coding standards, development methodologies, and best practices.

Desired Candidate Profile:

• Bachelor's degree in Computer Science & Engineering or a related field.

• At least 7 years of experience in Full Stack development

• Minimum of 5 years in a Lead/Architect role.

• Experience in designing and implementing complex applications and architecture.

• Minimum 4 years of experience in at least one cloud technology such as AWS, GCP, or Azure.

• Experience in Web Application Frameworks like Angular or React for frontend and NodeJS, Flask or Django for Backend.

• Strong knowledge of JavaScript, HTML, CSS, and other web technologies.

• Strong knowledge of at least one OO programming language such as Python, Java, or C#.

• Experience with state management libraries such as Redux, MobX or Zustand.

• Solid understanding of Security Best Practices.

• Good understanding of Testing – Unit, Integrated, Regression and E2E.

• Exposure to microservices / serverless architecture is a plus

• Exposure to CI/CD tools like Azure DevOps, Jenkins, GitLab CI/CD, etc.

• Write well-designed, testable, efficient code by using best software development practices.

• Active contribution to open-source communities and libraries.

Benefits:

• You will work in an open culture that promotes commitment over compliance, individual responsibility over rules and bringing out the best in everyone.

• You will be actively encouraged to attain certifications, lead technical workshops and conduct meetups to grow your own technology acumen and personal brand.

• You will be groomed and mentored by senior leaders to take on positions of increased responsibility.


If you are a passionate and skilled Full Stack Developer with leadership experience and want to be part of a young, innovative and competent team, we encourage you to apply.

Fintrac Global services
Hyderabad
5 - 8 yrs
₹5L - ₹15L / yr
Python
Bash
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
+2 more

Required Qualifications:

• Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.

• 5+ years of experience in a DevOps role, preferably for a SaaS or software company.

• Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).

• Proficiency in scripting languages (e.g., Python, Bash, Ruby).

• Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).

• Extensive experience with NGINX and similar web servers.

• Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

• Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).

• Ability to work on-call as needed and respond to emergencies in a timely manner.

• Experience with high-transaction e-commerce platforms.


Preferred Qualifications:

• Certifications in cloud computing or DevOps (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert) are a plus.

• Experience in a high-availability, 24x7x365 environment.

• Strong collaboration, communication, and interpersonal skills.

• Ability to work independently and as part of a team.
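As a flavour of the on-call automation this role calls for, here is a small, self-contained Python sketch of retrying a flaky health check with exponential backoff (the check itself is a stand-in; a real one would hit an NGINX upstream or a service endpoint):

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry fn with exponential backoff; re-raise after the final failed attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Simulate a flaky health check that fails twice, then succeeds.
calls = {"n": 0}
def flaky_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

status = call_with_backoff(flaky_check, sleep=lambda s: None)  # skip real sleeps in the demo
print(status)  # healthy
```

The injectable `sleep` keeps the demo instant and makes the helper easy to unit-test, which matters when the same pattern wraps real network calls.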

Career Forge

Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, PySpark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/Numpy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.
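As a small illustration of the time-series processing mentioned above, here is a stdlib-only Python sketch that buckets telemetry samples into fixed windows and averages them (the sample data is invented):

```python
from collections import defaultdict

def window_mean(samples, window_s):
    """Average (timestamp, value) samples into fixed windows of window_s seconds."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Align each sample to the start of its window.
        buckets[int(ts // window_s) * window_s].append(value)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}

telemetry = [(0, 10.0), (5, 20.0), (12, 30.0), (19, 50.0)]
print(window_mean(telemetry, 10))  # {0: 15.0, 10: 40.0}
```

A production pipeline would do the same aggregation with Pandas resampling or Spark windowing; the bucketing logic is the same idea at scale.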


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Remote only
5 - 8 yrs
₹10L - ₹20L / yr
Python
Django
Flask
Google Cloud Platform (GCP)
API
+2 more

Senior Python Developer (6+ Years Experience)


Core Skills:


  • Strong Python experience and an understanding of modern design patterns, abstractions, and object-oriented programming at scale.
  • Strong understanding of event-based/async architectures (Kafka, WebSockets)
  • Database interactions
  • Working understanding of infrastructure as code (Terraform) and how to develop code that will be deployed by IaC.
  • Working understanding of Kubernetes, including containerization and deploying and debugging services running on k8s.
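The event-based/async point above can be sketched with a bounded asyncio producer/consumer queue, a stdlib stand-in for the Kafka-style pipelines the role mentions (the event names are hypothetical):

```python
import asyncio

async def producer(queue, events):
    for e in events:
        await queue.put(e)
    await queue.put(None)  # sentinel: no more events

async def consumer(queue, out):
    while True:
        event = await queue.get()
        if event is None:
            break
        out.append(event.upper())  # stand-in for real event handling

async def main(events):
    queue = asyncio.Queue(maxsize=2)  # bounded queue applies backpressure
    out = []
    await asyncio.gather(producer(queue, events), consumer(queue, out))
    return out

results = asyncio.run(main(["created", "paid", "shipped"]))
print(results)  # ['CREATED', 'PAID', 'SHIPPED']
```

The bounded queue is the key design choice: a slow consumer makes the producer await, the same backpressure idea Kafka consumer groups give you at scale.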


Preferred:


  • Working GCP experience including GKE, GCE, Cloud Functions/Cloud Run, IAM.
  • Git (required)
  • CI/CD and automation experience; stack includes Jenkins, Terraform, Argo, Harness.


HighLevel Inc.

Posted by Monica Sankule
Remote only
5 - 7 yrs
₹20L - ₹30L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+7 more

HighLevel is an all-in-one, white-label marketing platform for agencies & consultants. Our goal as a business is to create a sustainable, powerful, “all things marketing” operating system that creates limitless opportunities for our customers. With over 30,000 customers, we need people like YOU to help us grow and scale even further in the coming years.


We currently have 700+ employees worldwide, working remotely as well as in our headquarters, which is located in Dallas, Texas.


Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and, above all, encourage a healthy work-life balance for our employees wherever they call home!


About the role

We are looking for an experienced software engineer with strong technical and communication skills who has worked extensively on frontend and backend engineering systems that process large amounts of data at scale and manage services that handle thousands of requests every minute.


Currently, we have millions of sales funnels, websites, attributions, forms and survey tools for lead generation. Our B2B customers use these tools to bring in the leads to the HighLevel CRM system. We are working to continuously improve the functionality of these tools to solve our customers’ business needs. In this role, you will be expected to be autonomous, guide other developers who might need technical help, collaborate with other technical teams, product, support and customer success. 


Your Responsibilities

  • Improve and create new lead capture domain models.
  • Build backend and frontend API features and architecture.
  • Work cross-functionally across our platform, experience, integrations, payments and marketplace teams.
  • Drive performance through benchmarking and optimization
  • Work with a wide range of systems, processes, and technologies to own and solve problems from end to end
  • Collaborate closely with our leadership team including engineers, designers, and product managers to build new features and products
  • Uphold high engineering standards and bring consistency to the many codebases and systems you will encounter.
  • Work on 1 to 2 products.
  • Create and improve lead capture tools like funnels, websites, forms, surveys, social media

Your Core Skills

  • 5-7 years of experience as a full-stack software engineer, having worked on big-scale projects and solved complex problems.
  • Proficient with various programming languages and tools such as but not limited to Javascript, TypeScript, Vue.js, NodeJS, and GraphQL
  • Must be able to work with a team and collaborate remotely.
  • You have an entrepreneurial mindset, are eager to take on different roles when necessary and know how to navigate a start-up environment.
  • You are fulfilled by being a generalist working on both the frontend, backend, and anything it takes to solve problems and delight users and take pride in working on projects involving a variety of technologies and systems.
  • Ability to stitch together many different services and processes together, even if you have not worked with them before.
  • Hold a great deal of empathy for your team and users, you are a steward of crafting great experiences.
  • Have great communication skills and can thrive in a highly collaborative environment when working cross-functionally with many stakeholders.
  • Driven by product quality, and innately know how to balance trade-offs with time to launch new features.
  • A keen eye for design and love to think about user flows and user experiences.
  • Must have experience with HTML5 and CSS3


Additional Skills

  • Experience with the Nuxt.js framework is a plus.
  • Experience with MongoDB profiling and query optimization.
  • Using CSS frameworks such as Bootstrap and TailwindCSS
  • Experience working in the GCP (Google Cloud Platform) ecosystem.
Three Dots

Posted by Akul Aggarwal
Bengaluru (Bangalore)
3 - 8 yrs
₹12L - ₹15L / yr
React.js
Javascript
AngularJS (1.x)
Angular (2+)
NodeJS (Node.js)
+4 more

Job Title: Senior Full Stack Engineer

Location: Bangalore

About threedots:

At threedots, we are committed to helping our customers navigate the complex world of secured credit financing. Our mission is to facilitate financial empowerment through innovative, secured credit solutions like Loans Against Property, Securities, FDs & more. Founded by early members of Groww, we are a well-funded startup with over $4M in funding from India's top investors.


Role Overview:

The Senior Full Stack Engineer will be responsible for developing and managing our web infrastructure and leading a team of talented engineers. With a solid background in both front and back-end technologies, and a proven track record of developing scalable web applications, the ideal candidate will have a hands-on approach and a leader's mindset.


Key Responsibilities:

  • Lead the design, development, and deployment of our Node and ReactJS-based applications.
  • Architect scalable and maintainable web applications that can handle the needs of a rapidly growing user base.
  • Ensure the technical feasibility and smooth integration of UI/UX designs.
  • Optimize applications for maximum speed and scalability.
  • Implement comprehensive security and data protection.
  • Manage and review code contributed by the team and maintain high standards of software quality.
  • Deploy applications on AWS/GCP and manage server infrastructure.
  • Work collaboratively with cross-functional teams to define, design, and ship new features.
  • Provide technical leadership and mentorship to other team members.
  • Keep abreast with the latest technological advancements to leverage new tech and tools.

Minimum Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Minimum 3 years of experience as a full-stack developer.
  • Proficient in Node.js and ReactJS.
  • Experience with cloud services (AWS/GCP).
  • Solid understanding of web technologies, including HTML5, CSS3, JavaScript, and responsive design.
  • Experience with databases, web servers, and UI/UX design.
  • Strong problem-solving skills and the ability to make sound architectural decisions.
  • Proven ability to lead and mentor a tech team.

Preferred Qualifications:

  • Experience in fintech
  • Strong knowledge of software development methodologies and best practices.
  • Experience with CI/CD pipelines and automated testing.
  • Familiarity with microservices architecture.
  • Excellent communication and leadership skills.

What We Offer:

  • The opportunity to be part of a founding team and shape the company's future.
  • Competitive salary with equity options.
  • A creative and collaborative work environment.
  • Professional growth opportunities as the company expands.
  • Additional Startup Perks


Intellikart Ventures LLP
ramandeep intellikart
Posted by ramandeep intellikart
Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹30L / yr
Python
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Databases
+1 more

How You'll Contribute:

● Redefine fintech architecture standards by building easy-to-use, highly scalable, robust, and flexible APIs

● Analyze systems/architectures in depth, predict potential future breakdowns, and proactively propose solutions

● Partner with internal stakeholders to identify potential features that could cater to our growing business needs

● Drive the team towards writing high-quality code, tackling abstracts/flaws in system design to attain revved-up API performance and high code reusability and readability.

● Think through the complex Fintech infrastructure and propose an easy-to-deploy modular infrastructure that could adapt and adjust to the specific requirements of the growing client base

● Design and create for scale, optimized memory usage and high throughput performance.​


Skills Required:

● 5+ years of experience in the development of complex distributed systems

● Prior experience in building sustainable, reliable and secure microservice-based scalable architecture using Python Programming Language

● In-depth understanding of Python associated libraries and frameworks

● Strong involvement in managing and maintaining production-level code with high-volume API hits and low-latency APIs

● Strong knowledge of data structures, algorithms, design patterns, multithreading concepts, etc.

● Ability to design and implement technical road maps for the system and components

● Bring in new software development practices, design/architecture innovations to make our Tech stack more robust

● Hands-on experience in cloud technologies like AWS/GCP/Azure as well as relational databases like MySQL/PostgreSQL or any NoSQL database like DynamoDB
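As one example of the low-latency, thread-safe building blocks such APIs rely on, here is a minimal token-bucket rate limiter in Python (the injectable clock is only for deterministic demonstration):

```python
import threading
import time

class TokenBucket:
    """Thread-safe token-bucket limiter: allow() is O(1) and lock-guarded."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate, self.capacity, self.now = rate, capacity, now
        self.tokens = capacity
        self.last = now()
        self.lock = threading.Lock()

    def allow(self):
        with self.lock:
            t = self.now()
            # Refill tokens for the time elapsed since the last call.
            self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
            self.last = t
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

clock = [0.0]
bucket = TokenBucket(rate=1, capacity=2, now=lambda: clock[0])
burst = [bucket.allow() for _ in range(3)]  # two tokens, so the third call is rejected
clock[0] += 1.0                             # one second passes -> one token refilled
refilled = bucket.allow()
print(burst, refilled)  # [True, True, False] True
```

In a real service the same class would sit in API middleware keyed per client; the lock keeps it correct under concurrent request threads.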

Healthtech Startup
Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹30L / yr
Google Cloud Platform (GCP)
BigQuery

Description: 

As a Data Engineering Lead at Company, you will be at the forefront of shaping and managing our data infrastructure with a primary focus on Google Cloud Platform (GCP). You will lead a team of data engineers to design, develop, and maintain our data pipelines, ensuring data quality, scalability, and availability for critical business insights. 


Key Responsibilities: 

1. Team Leadership: 

a. Lead and mentor a team of data engineers, providing guidance, coaching, and performance management. 

b. Foster a culture of innovation, collaboration, and continuous learning within the team. 

2. Data Pipeline Development (Google Cloud Focus): 

a. Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, and Dataprep.

b. Implement best practices for data extraction, transformation, and loading (ETL) processes on GCP. 

3. Data Architecture and Optimization: 

a. Define and enforce data architecture standards, ensuring data is structured and organized efficiently. 

b. Optimize data storage, processing, and retrieval for maximum performance and cost-effectiveness on GCP.

4. Data Governance and Quality: 

a. Establish data governance frameworks and policies to maintain data quality, consistency, and compliance with regulatory requirements.

b. Implement data monitoring and alerting systems to proactively address data quality issues.

5. Cross-functional Collaboration: 

a. Collaborate with data scientists, analysts, and other cross-functional teams to understand data requirements and deliver data solutions that drive business insights. 

b. Participate in discussions regarding data strategy and provide technical expertise. 

6. Documentation and Best Practices: 

a. Create and maintain documentation for data engineering processes, standards, and best practices. 

b. Stay up-to-date with industry trends and emerging technologies, making recommendations for improvements as needed. 


Qualifications 

● Bachelor's or Master's degree in Computer Science, Data Engineering, or related field. 

● 5+ years of experience in data engineering, with a strong emphasis on Google Cloud Platform. 

● Proficiency in Google Cloud services, including BigQuery, Dataflow, Dataprep, and Cloud Storage. 

● Experience with data modeling, ETL processes, and data integration.

● Strong programming skills in languages like Python or Java.

● Excellent problem-solving and communication skills. 

● Leadership experience and the ability to manage and mentor a team.
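The data-monitoring responsibility above can be sketched as a simple batch quality check in Python (column names and the threshold are hypothetical; a production version would run inside the pipeline and feed an alerting system):

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows, max_null_rate=0.05, required=("patient_id", "visit_ts")):
    """Return a list of human-readable violations; an empty list means the batch passes."""
    violations = []
    for col in required:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            violations.append(f"{col}: null rate {rate:.0%} exceeds {max_null_rate:.0%}")
    return violations

batch = [{"patient_id": 1, "visit_ts": "2024-01-01"},
         {"patient_id": None, "visit_ts": "2024-01-02"}]
violations = check_quality(batch)
print(violations)
```

The same checks map naturally onto BigQuery scheduled queries or Dataflow-side assertions once the batch lives in GCP rather than in memory.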


Thoughtworks

Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands on experience in MapR, Cloudera, Hortonworks and/or cloud (AWS EMR, Azure HDInsights, Qubole etc.) based Hadoop distributions

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Reqroots

Posted by Dhanalakshmi D
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client @ Bangalore - permanent role.

Experience: 4+ Yrs

Responsibilities:

• As part of a team, you will design, develop, and maintain scalable multi-cloud DevOps blueprints.

• Understand overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering & legacy application modernization

• Continuously improve CI/CD pipelines, tools, processes, procedures, and systems relating to developer productivity

• Collaborate continuously with the product development teams to implement CI/CD pipeline.

• Contribute to the subject matter on Developer Productivity, DevOps, Infrastructure Automation best practices.


Mandatory Skills:

• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.

• Strong scripting skills (Java or Python) is a must.

• Experience with automation tools such as Ansible, Chef, Puppet etc.

• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle

• Hands-on working experience in developing or deploying microservices is a must.

• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.

• Knowledge about microservices hosted in leading cloud environments

• Experience with containerizing applications (Docker preferred) is a must

• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.

• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
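A CI/CD pipeline like the ones described above is, at heart, a DAG of stages. As a minimal sketch (stage names are hypothetical), the stdlib `graphlib` can derive a valid execution order from stage dependencies:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each stage maps to the set of stages it depends on.
stages = {
    "build": set(),
    "unit-test": {"build"},
    "image": {"unit-test"},
    "deploy-staging": {"image"},
    "integration-test": {"deploy-staging"},
    "deploy-prod": {"integration-test"},
}

# static_order() raises CycleError if the dependencies form a cycle.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

This is the same ordering logic Jenkins or GitLab CI applies to `needs:`/stage declarations before scheduling jobs.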


Desirable Skills:

• Experience working with Secret management services such as HashiCorp Vault is desirable.

• Experience working with Identity and access management services such as Okta, Cognito is desirable.

• Experience with monitoring systems such as Prometheus, Grafana is desirable.


Educational Qualifications and Experience:

• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)

• 4 to 6 years of hands-on experience in server-side application development & DevOps

Stanza Living
Parul Pal
Posted by Parul Pal
Gurugram
4 - 7 yrs
₹20L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Who we are :


Stanza Living is India's largest and fastest growing tech-enabled, managed accommodation company that delivers a hospitality-led living experience to migrant students and young working professionals across India. We have a full-stack business model that focuses on design, development and delivery of daily living solutions tailored to the young consumers' lifestyle. From smartly-planned residences, host of amenities and services for hassle-free living to exclusive community engagement programmes - everything is seamlessly integrated through technology to ensure the highest consumer delight.


Today, we are :


  • India's largest managed accommodation company with over 75,000+ beds under management across 25+ cities
  • Most capitalized player in the managed accommodation space, backed by global marquee investors - Falcon Edge, Equity International, Sequoia Capital, Matrix Partners, Accel Partners
  • Recognized as the Best Real Estate Tech company across the Globe in 2020 by leading analysis agency, Tracxn
  • LinkedIn Top Startup to Work for - 2019


Objectives of this role :


- Work in tandem with our engineering team to identify and implement the most optimal cloud-based solutions for the company

- Define and document best practices and strategies regarding application deployment and infrastructure maintenance

- Provide guidance, thought leadership, and mentorship to developer teams to build their cloud competencies

- Ensure application performance, uptime, and scale, maintaining high standards for code quality and thoughtful design

- Manage cloud environments in accordance with company security guidelines


Job Description :


- Excellent understanding of Cloud Platform (AWS)

- Strong knowledge of AWS services, design, and configuration for enterprise systems

- Good knowledge of Kubernetes configuration and Docker

- Understanding the needs of the business for defining AWS system specifications

- Understand Architecture Requirements and ensure effective support activities

- Evaluating and choosing suitable AWS services and suggesting methods for integration

- Overseeing assigned programs and guiding the team members

- Providing assistance when technical problems arise

- Making sure the agreed infrastructure and architecture are implemented

- Addressing the technical concerns, suggestions, and ideas

- Configure Monitoring systems to make sure they meet business goals as well as user requirements

- Excellent knowledge of AWS IaaS Layer

- Ability to lead & implement PS workloads or POCs

- Ensure continual knowledge management

Ignite Solutions

Posted by Meghana Dhamale
Remote, Pune
5 - 7 yrs
₹15L - ₹20L / yr
Python
LinkedIn
Django
Flask
Amazon Web Services (AWS)
+2 more

We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends. 

 What will you work on?

  •  Interface with clients
  • Recommend tech stacks
  • Define end-to-end logical and cloud-native architectures
  •  Define APIs
  • Integrate with 3rd party systems
  • Create architectural solution prototypes
  • Hands-on coding, team lead, code reviews, and problem-solving

What Makes You A Great Fit?

  • 5+ years of software experience 
  • Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
  • Solid expertise and hands-on experience in Python with Flask or Django
  • Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
  • Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
  • Knowledge of DevOps practices
  • Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
  • Excellent communication skills, verbal and written

The job is for a full-time position at our Pune (Viman Nagar) office (https://goo.gl/maps/o67FWr1aedo).

(Note: We are working remotely at the moment. However, once the COVID situation improves, the candidate will be expected to work from our office.)

Emint
Agency job
via anzy global by Roshan Muniraj
HSR Layout, Bangalore
5 - 8 yrs
₹25L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

FINTECH CANDIDATES ONLY


About the job:


Emint is a fintech startup with the mission to 'Make the best investing product that Indian consumers love to use, with simplicity & intelligence at the core'. We are creating a platform that gives a holistic view of market dynamics and helps our users make smart & disciplined investment decisions. Emint is founded by a stellar team of individuals who come with decades of experience investing in Indian & global markets. We are building a highly skilled & disciplined team of professionals and are looking for equally motivated individuals to be part of Emint. We are currently looking to hire a DevOps engineer to join our team in Bangalore.


Job Description :


Must Have:


• Hands on experience on AWS DEVOPS

• Experience in Unix with BASH scripting is must

• Experience working with Kubernetes, Docker.

• Experience with GitLab, GitHub, or Bitbucket and an artifact repository (e.g., Artifactory)

• Packaging, deployment

• CI/CD pipeline experience (Jenkins is preferable)

• CI/CD best practices


Good to Have:


• Startup Experience

• Knowledge of source code management guidelines

• Experience with deployment tools like Ansible/puppet/chef is preferable

• IAM knowledge

• Coding knowledge of Python adds value

• Test automation setup experience


Qualifications:


• Bachelor's degree or equivalent experience in Computer Science or related field

• Graduates from IIT / NIT/ BITS / IIIT preferred

• Professionals with fintech ( stock broking / banking ) preferred

• Experience in building & scaling B2C apps preferred

Read more
AmplifAI
Vijay Chavan
Posted by Vijay Chavan
Hyderabad
8 - 12 yrs
₹15L - ₹25L / yr
skill iconHTML/CSS
skill iconJavascript
skill iconAngular (2+)
skill iconAngularJS (1.x)
ASP.NET
+9 more

Job Description

A technical lead who will be responsible for development, managing teams, and monitoring tasks and sprints. They will also work with business analysts to gather new requirements and change requests, help resolve application issues, and assist developers when they are stuck.

Responsibilities

·       Design and develop applications based on the architecture provided by the solution architects.

·       Help team members and co-developers achieve their tasks.

·       Maintain and monitor new work items and support issues, and assign them to the respective developers.

·       Communicate with business analysts and solution architects about new requirements and change requests.

·       Resolve support tickets with the help of your team within service timelines.

·       Manage sprints to achieve the targets.

Technical Skills

·       Microsoft .NET MVC

·       .NET Core 3.1 or greater

·       C#

·       Web API

·       Async Programming, Threading, and tasks

·       Test Driven Development 

·       Strong expert in SQL (Table Design, Programing, Optimization)

·       Azure Functions

·       Azure Storage

·       MongoDB, NoSQL

Qualifications/Skills Desired:

·       Any Bachelor’s degree relevant to Computer Science. MBA or equivalent is a plus

·       Minimum of 8-10 years of IT experience, including managing teams, of which 4-5 years should be as a technical/team lead.

·       Strong verbal and written communication skills with the ability to adapt to many different personalities and conflict resolution skills required

·       Must have excellent organizational and time management skills with strong attention to detail

·       Confidentiality with privacy-sensitive customer and employee documents

·       Strong work ethic - demonstrate good attitude and judgment, discretion, and maintain high level of confidentiality

·       Previous experience of customer interactions

Read more
DevDarshan

at DevDarshan

1 recruiter
Suyash Taneja
Posted by Suyash Taneja
Remote only
5 - 10 yrs
₹20L - ₹30L / yr
DevOps
Software architecture
skill iconNodeJS (Node.js)
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)

DevDarshan is a devotional platform launched by IIT graduates to promote the teachings of Indian culture and the Hindu way of life in India and around the world. In the 21st century, where everything around us is digitized, why not temples? That’s the idea behind DevDarshan. Through our mobile application that connects temples and devotees, we’ve built a community of devotees from multiple countries, successfully raised seed investment, and started to generate revenue for the temples and priests associated with us. Right now we are looking to grow our team and build new exciting features for devotees all around the world.

This is where you come in.

We are looking for a passionate and self-motivated individual to help design our backend Systems to support both the Mobile App and WebApp


Requirements:

  • Experience in NodeJS, Typescript, ExpressJS, and AWS EC2; you have built backend REST APIs
  • Expert in system design and software architecture processes, and how different components interact with each other at scale
  • Experience with DevOps, Docker, AWS, Google Cloud
  • Experience managing development teams and the complete delivery lifecycle
  • Good understanding and experience of NoSQL and SQL databases, and which to use when
  • Experience with CI/CD systems like Jenkins, Github Actions
  • Some experience with realtime databases/systems or socket-based applications is preferred
  • Some experience with building algorithms or social apps is preferred
  • Any experience with handling video delivery (ffmpeg/HLS/WebRTC) is preferred but not mandatory


The Role

  • You will be involved at all stages of the product development process, from design to development and deployment.
  • You will architect, build, and scale the backend systems that power our applications, which will be used by millions of devotees every day.
  • You have a passion for improving techniques, processes, and tracking, and will work daily to continuously improve our engineering practices.


Read more
DevDarshan

at DevDarshan

1 recruiter
Suyash Taneja
Posted by Suyash Taneja
Remote only
3 - 10 yrs
₹20L - ₹30L / yr
skill iconNodeJS (Node.js)
skill iconMongoDB
Mongoose
skill iconExpress
skill iconAmazon Web Services (AWS)
+5 more

DevDarshan is a devotional platform launched by IIT graduates to promote the teachings of Indian culture and the Hindu way of life in India and around the world. In the 21st century, where everything around us is digitized, why not temples? That’s the idea behind DevDarshan. Through our mobile application that connects temples and devotees, we’ve built a community of devotees from multiple countries, successfully raised seed investment, and started to generate revenue for the temples and priests associated with us. Right now we are looking to grow our team and build new exciting features for devotees all around the world.

This is where you come in.

We are looking for a passionate and self-motivated individual to help design our backend Systems to support both the Mobile App and WebApp


Requirements

  • Experience in NodeJS, Typescript, ExpressJS, and AWS EC2; you have built backend REST APIs
  • Expert in system design and software architecture processes, and how different components interact with each other at scale
  • Experience with DevOps, Docker, AWS, Google Cloud
  • Experience managing development teams and the complete delivery lifecycle
  • Good understanding and experience of NoSQL and SQL databases, and which to use when
  • Experience with CI/CD systems like Jenkins, Github Actions
  • Some experience with realtime databases/systems or socket-based applications is preferred
  • Some experience with building algorithms or social apps is preferred
  • Any experience with handling video delivery (ffmpeg/HLS/WebRTC) is preferred but not mandatory



The Role

This Role naturally progresses into Engineering Manager / Software Architect.

  • You will be involved at all stages of the product development process, from design to development and deployment.
  • You will architect, build, and scale the backend systems that power our applications, which will be used by millions of devotees every day.
  • You have a passion for improving techniques, processes, and tracking, and will work daily to continuously improve our engineering practices.


Read more
InnoMick Technology Pvt Ltd

at InnoMick Technology Pvt Ltd

2 candid answers
Sravani Vadranam
Posted by Sravani Vadranam
Hyderabad
6 - 6 yrs
₹10L - ₹15L / yr
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconMongoDB
+7 more



Position: Technical Architect

Location: Hyderabad

 Experience: 6+ years


Job Summary:

We are looking for an experienced Technical Architect with a strong background in Python, Node.js, and React to lead the design and development of complex and scalable software solutions. The ideal candidate will possess exceptional technical skills, a deep understanding of software architecture principles, and a proven track record of successfully delivering high-quality projects. You should be capable of leading a cross-functional team that's responsible for the full software development life cycle, from conception to deployment with Agile methodologies.

 

 

Responsibilities:

●       Lead the design, development, and deployment of software solutions, ensuring architectural integrity and high performance.

●       Collaborate with cross-functional teams, including developers, designers, and product managers, to define technical requirements and create effective solutions.

●       Provide technical guidance and mentorship to development teams, ensuring best practices and coding standards are followed.

●       Evaluate and recommend appropriate technologies, frameworks, and tools to achieve project goals.

●       Drive continuous improvement by staying updated with industry trends, emerging technologies, and best practices.

●       Conduct code reviews, identify areas of improvement, and promote a culture of excellence in software development.

●       Participate in architectural discussions, making strategic decisions and aligning technical solutions with business objectives.

●       Troubleshoot and resolve complex technical issues, ensuring optimal performance and reliability of software applications.

●       Collaborate with stakeholders to gather and analyze requirements, translating them into technical specifications.

●       Define and enforce architectural patterns, ensuring scalability, security, and maintainability of systems.

●       Lead efforts to refactor and optimize existing codebase, enhancing performance and maintainability.

Qualifications:

●       Bachelor's degree in Computer Science, Software Engineering, or a related field. Master's degree is a plus.

●       Minimum of 8 years of experience in software development with a focus on Python, Node.js, and React.

●       Proven experience as a Technical Architect, leading the design and development of complex software systems.

●       Strong expertise in software architecture principles, design patterns, and best practices.

●       Extensive hands-on experience with Python, Node.js, and React, including designing and implementing scalable applications.

●       Solid understanding of microservices architecture, RESTful APIs, and cloud technologies (AWS, GCP, or Azure).

●       Extensive knowledge of JavaScript, web stacks, libraries, and frameworks.

●       Ability to create automation test cases and unit test cases (optional)

●       Proficiency in database design, optimization, and data modeling.

●       Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).

●       Excellent problem-solving skills and the ability to troubleshoot complex technical issues.

●       Strong communication skills, both written and verbal, with the ability to effectively interact with cross-functional teams.

●       Prior experience in mentoring and coaching development teams.

●       Strong leadership qualities with a passion for technology innovation.

●       Experience using Linux-based development environments with GitHub and CI/CD

●       Experience with the Atlassian stack (JIRA/Confluence)

Read more
Staffbee Solutions INC
Remote only
6 - 10 yrs
₹1L - ₹1.5L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+11 more

Looking for freelance work?

We are seeking a freelance Data Engineer with 7+ years of experience.

Skills required: deep knowledge of any cloud (AWS, Azure, Google Cloud), Databricks, data lakes, data warehousing, Python/Scala, SQL, BI, and other analytics systems

 

What we are looking for

We are seeking an experienced Senior Data Engineer with experience in architecture, design, and development of highly scalable data integration and data engineering processes

 

  • The Senior Consultant must have a strong understanding and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems
  • Strong in scripting languages like Python, Scala
  • 5+ years of hands-on experience with one or more of these data integration/ETL tools.
  • Experience building on-prem data warehousing solutions.
  • Experience with designing and developing ETLs, Data Marts, Star Schema
  • Designing a data warehouse solution using Synapse or Azure SQL DB
  • Experience building pipelines using Synapse or Azure Data Factory to ingest data from various sources
  • Understanding of integration run times available in Azure.
  • Advanced working SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases
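The ETL and data-mart work described above follows a consistent pattern regardless of tool: extract raw rows, transform (here, aggregate), and load into a reporting table. A toy sketch of that pattern using only Python's built-in sqlite3, with made-up table names and sample rows; production pipelines would use Synapse, Data Factory, or similar:

```python
import sqlite3

# Toy ETL sketch: extract raw order rows, transform (aggregate per customer),
# and load the result into a reporting table - the same extract/transform/load
# pattern ETL tools implement at warehouse scale.
def run_etl(conn):
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS orders_raw (customer TEXT, amount REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS customer_totals (customer TEXT PRIMARY KEY, total REAL)")
    # Extract: in a real pipeline this data would come from source systems.
    cur.executemany("INSERT INTO orders_raw VALUES (?, ?)",
                    [("alice", 10.0), ("bob", 5.5), ("alice", 4.5)])
    # Transform + load: aggregate the raw rows into the mart table.
    cur.execute("""
        INSERT OR REPLACE INTO customer_totals
        SELECT customer, SUM(amount) FROM orders_raw GROUP BY customer
    """)
    conn.commit()
    return dict(cur.execute("SELECT customer, total FROM customer_totals"))

if __name__ == "__main__":
    print(run_etl(sqlite3.connect(":memory:")))  # {'alice': 14.5, 'bob': 5.5}
```

The same SELECT...GROUP BY shape is what populates fact and dimension tables in a star schema, just with proper keys and incremental loads.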


Read more
Nayan Technologies
Agency job
via OptimHire by Ajdevi Kindo
South Extention II, Delhi
4 - 9 yrs
₹10L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Description


BUDGET: 20 LPA (MAX)

What you will do - Key Responsibilities


  • The DevOps architect will be responsible for testing, QC, and debugging support across the various server-side and Java software/servers for products developed or procured by the company. They will debug problems with the integration of all software and with on-field deployments, and suggest improvements, workarounds ("hacks"), and structured solutions/approaches.
  • Responsible for Scaling the architecture towards 10M+ users.
  • Will work closely with other team members including other Web Developers, Software Developers, Application Engineers, product managers to test and deploy existing products for various specialists and personnel using the software.
  • Will act in capacity of Team Lead as necessary to coordinate and organize individual effort towards a successful completion / demo of an application.
  • Will be solely responsible for the application approval before demo to clients, sponsors and investors.


Essential Requirements


  • Should understand the ins and outs of Docker and Kubernetes
  • Can architect complex cloud-based solutions using multiple products on either AWS or GCP
  • Should have a solid understanding of cryptography and secure communication
  • Know your way around Unix systems and can write complex shell scripts comfortably
  • Should have a solid understanding of Processes and Thread Scheduling at the OS level
  • Skilled with Ruby, Python or similar scripting languages
  • Experienced with installing and managing multiple GPUs spread across multiple machines
  • Should have at least 5 years managing large server deployments
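On the "solid understanding of cryptography and secure communication" point above, one building block worth knowing cold is message authentication: sender and receiver share a key, and the receiver recomputes the tag with a constant-time comparison. A small sketch with Python's standard library (the key and message are illustrative only):

```python
import hashlib
import hmac

# HMAC-SHA256 message authentication sketch: one building block of secure
# communication. The receiver recomputes the tag and compares it with a
# constant-time check to avoid timing side channels.
def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    # hmac.compare_digest runs in time independent of where strings differ.
    return hmac.compare_digest(sign(key, message), tag)

if __name__ == "__main__":
    key = b"shared-secret"
    tag = sign(key, b"deploy build 42")
    print(verify(key, b"deploy build 42", tag))  # True
    print(verify(key, b"deploy build 43", tag))  # False
```

The constant-time `compare_digest` rather than `==` is exactly the kind of detail interviews for this role tend to probe.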

Category

DevOps Engineer (IT & Networking)

Expertise

DevOps - 3 years - Intermediate
Python - 2 years
AWS - 3 years - Intermediate
Docker - 3 years - Intermediate
Kubernetes - 3 years - Intermediate

Read more
Bito Inc

at Bito Inc

2 recruiters
Amrit Dash
Posted by Amrit Dash
Remote only
5 - 8 yrs
Best in industry
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Windows Azure
Ansible
Chef
+7 more

Bito is a startup that is using AI (ChatGPT, OpenAI, etc) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31% and performing more than 1 million AI requests per week.

 

Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and it is incredibly difficult!

 

We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.

 

We are hiring a DevOps Engineer to join our team.

 

Responsibilities:

  • Collaborate with the development team to design, develop, and implement Java-based applications
  • Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
  • Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
  • Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
  • Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
  • Evaluate and define/modify configuration management strategies and processes using Ansible
  • Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
  • Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort

Requirements:

  • Minimum 4+ years of relevant work experience in a DevOps role
  • At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
  • Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
  • Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
  • Mastery in configuration automation tool sets such as Ansible, Chef, etc
  • Proficiency with Jira, Confluence, and Git toolset
  • Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
  • Proven ability to manage and prioritize multiple diverse projects simultaneously

What do we offer: 

At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology. 

·               Work from anywhere 

·               Flexible work timings 

·               Competitive compensation, including stock options 

·               A chance to work in the exciting generative AI space 

·               Quarterly team offsite events

Read more
Hyno Technologies

at Hyno Technologies

4 recruiters
Shreeya Khandelwal
Posted by Shreeya Khandelwal
Remote only
5 - 7 yrs
₹12L - ₹15L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Description: DevOps Engineer 



About Hyno:


Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last 2 years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solution to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology at minimal cost. To us, any new challenge is an opportunity.

As part of its expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.


Position: DevOps Engineer

Experience: 5-7 years


Responsibilities:


  • Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
  • Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
  • Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources.
  • Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
  • Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
  • Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline.
  • Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
  • Follow regulatory and ISO 13485 requirements.
  • Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.






Requirements:


  • Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
  • Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
  • Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools
  • Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
  • Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
  • Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
  • Familiarity with configuration management tools like Ansible, Puppet, or Chef.
  • Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
  • Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi.
  • Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
  • Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
  • Strong communication skills and the ability to work independently as well as in a team.
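The infrastructure-as-code requirement above means describing resources as data and rendering them into a declarative template rather than clicking through a console. A rough sketch of that idea, emitting a CloudFormation-style JSON document; the resource name and bucket name are hypothetical, and real work would use Terraform, CloudFormation, or Pulumi directly:

```python
import json

# Infrastructure-as-code flavoured sketch: resources are plain data, and the
# template is rendered deterministically from them - so it can be diffed,
# reviewed, and version-controlled like any other code.
def render_template(resources):
    return json.dumps(
        {
            "AWSTemplateFormatVersion": "2010-09-09",
            "Resources": {
                name: {"Type": rtype, "Properties": props}
                for name, (rtype, props) in resources.items()
            },
        },
        indent=2,
        sort_keys=True,  # deterministic output makes diffs meaningful
    )

if __name__ == "__main__":
    print(render_template({
        "AppBucket": ("AWS::S3::Bucket", {"BucketName": "hyno-demo-artifacts"}),
    }))
```

Sorting keys so that re-rendering produces byte-identical output is the property that makes IaC reviewable in pull requests.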


Nice to Have:


  • Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
  • Experience with microservices architecture and serverless computing.


Soft Skills:

  • Excellent written and verbal communication skills.
  • Ability to manage conflict effectively.
  • Ability to adapt and be productive in a dynamic environment.
  • Strong communication and collaboration skills supporting multiple stakeholders and business operations.
  • Self-starter, self-managed, and a team player.


Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.


Read more
APIwiz
Balaji Vijayan
Posted by Balaji Vijayan
Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
Linux/Unix
skill iconDocker
skill iconKubernetes
+1 more

Overview

Apiwiz (Itorix Inc) is looking for software engineers to join our team, grow with us, introduce us to new ideas and develop products that empower our users. Every day, you’ll work with team members across disciplines developing products for Apiwiz (Itorix Inc). You’ll interact daily with our product managers to understand our domain and create technical solutions that push us forward. We want to work with other engineers who bring knowledge and excitement about our opportunities.

You will impact major features and new product decisions as part of our remarkably high performing, collaborative team of engineers who thrive on the business impact of their work. With strong team support and significant freedom and self direction, you will experience the wealth of interesting, challenging problems that only a high growth startup can provide.


Roles & Responsibilities

  • Build, configure, and manage cloud compute and data storage infrastructure for multiple instances of AWS and Google Cloud Platform.
  • Manage VPCs, security groups, and user access to our various public cloud systems and services.
  • Develop processes and procedures for using cloud-based infrastructures, including access key rotation, disaster recovery, and building new services.
  • Help the business control costs by categorizing and tagging assets running in the cloud.
  • Develop scripts and workflows to manage cloud computing systems
  • Provide oversight on log aggregation and application performance monitoring surrounding our production environments.

What we’re looking for

  • 2-3 years of experience in provisioning, configuring, administering, automating, monitoring, and supporting enterprise cloud services
  • Strong experience in designing, building, maintaining, and securing AWS resources for high-availability and production-level systems and services
  • Familiarity with cloud concepts and practical hands-on experience on any cloud platform
  • Hands-on experience with AWS services like Elastic Compute Cloud (EC2), Elastic Load Balancers, S3, Elastic File System, VPC, Route53, and IAM
  • Providing 24/7 support for the application and infrastructure
  • Prior experience using infrastructure as a code software tool like Terraform.
  • Knowledge in software provisioning, configuration management, and application-deployment tools like Ansible.
  • Working knowledge of container technologies like Docker & Kubernetes cluster operations.
  • Familiarity with software automation tools Git, Jenkins, Code Pipeline, SonarQube
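The access-key-rotation procedure mentioned in the responsibilities above is, at its core, a policy decision: flag any key older than the rotation window. A hedged sketch of just that decision logic; the key IDs and dates are made up, and a real implementation would list keys via the cloud provider's API (e.g. IAM) and then create/deactivate them:

```python
from datetime import date, timedelta

# Access-key rotation policy sketch: given key creation dates, return the
# keys that have exceeded the maximum allowed age and must be rotated.
def keys_to_rotate(keys, today, max_age_days=90):
    cutoff = today - timedelta(days=max_age_days)
    return sorted(key_id for key_id, created in keys.items() if created < cutoff)

if __name__ == "__main__":
    keys = {"AKIA_OLD": date(2023, 1, 1), "AKIA_NEW": date(2023, 11, 20)}
    print(keys_to_rotate(keys, today=date(2023, 12, 1)))  # ['AKIA_OLD']
```

Keeping the policy pure (dates in, key IDs out) makes it trivially testable, with the API calls isolated in a thin wrapper around it.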


Read more
Impetus technologies
Anywhere in India
10 - 12 yrs
₹3L - ₹15L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+11 more

Experience:


Should have a minimum of 10-12 years of experience.

Should have product development/maintenance/production support experience in a support organization

Should have a good understanding of the services business for Fortune 1000 companies from an operations point of view

Ability to read, understand and communicate complex technical information

Ability to express ideas in an organized, articulate and concise manner

Ability to face stressful situations with a positive attitude

Any certification related to support services will be an added advantage

 


Education: BE, B- Tech (CS), MCA

Location: India

Primary Skills:

 

Hands-on experience with the OpenStack framework. Ability to set up a private cloud using an OpenStack environment. Awareness of various OpenStack services and modules

Strong experience with OpenStack services like Neutron, Cinder, Keystone, etc.

Proficiency in programming languages such as Python, Ruby, or Go.

Strong knowledge of Linux systems administration and networking.

Familiarity with virtualization technologies like KVM or VMware.

Experience with configuration management and IaC tools like Ansible, Terraform.

Subject matter expertise in OpenStack security

Solid experience with Linux and shell scripting

Sound knowledge of cloud computing concepts & technologies, such as docker, Kubernetes, AWS, GCP, Azure etc.

Ability to configure OpenStack environment for optimum resources

Good knowledge of security, operations in open stack environment

Strong knowledge of Linux internals, networking, storage, security

Strong knowledge of VMware Enterprise products (ESX, vCenter)

Hands on experience with HEAT orchestration

Experience with CI/CD, monitoring, operational aspects

Strong experience working with Rest API's, JSON

Exposure to Big data technologies ( Messaging queues, Hadoop/MPP, NoSQL databases)

Hands on experience with open source monitoring tools like Grafana/Prometheus/Nagios/Ganglia/Zabbix etc.

Strong verbal and written communication skills are mandatory

Excellent analytical and problem solving skills are mandatory

 

Role & Responsibilities


Advise customers and colleagues on cloud and virtualization topics

Work with the architecture team on cloud design projects using openstack

Collaborate with product, customer success, and presales on customer projects

Participate in onsite assessments and workshops when requested 

Provide subject matter expertise and mentor colleagues

Set up open stack environments for projects

Design, deploy, and maintain OpenStack infrastructure.

Collaborate with cross-functional chapters to integrate OpenStack with other services (k8s, DBaaS)

Develop automation scripts and tools to streamline OpenStack operations.

Troubleshoot and resolve issues related to OpenStack services.

Monitor and optimize the performance and scalability of OpenStack components.

Stay updated with the latest OpenStack releases and contribute to the OpenStack community.

Work closely with Architects and Product Management to understand requirements

Should be capable of working independently and be responsible for end-to-end implementation

Should work with complete ownership and handle all issues without missing SLAs

Work closely with engineering team and support team

Should be able to debug the issues and report appropriately in the ticketing system

Contribute to improving the efficiency of the assignment through quality improvements and innovative suggestions

Should be able to debug/create scripts for automation

Should be able to configure monitoring utilities & set up alerts

Should be hands-on in setting up OS, applications, and databases, and have a passion to learn new technologies

Should be able to scan logs, errors, and exceptions and get to the root cause of the issue

Contribute to developing a knowledge base in collaboration with other team members

Maintain customer loyalty through integrity and accountability

Groom and mentor team members on project technologies and work
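Getting from raw logs to a root cause, as the responsibilities above require, usually starts with counting error signatures so the dominant failure surfaces first. A small sketch of that triage step; the OpenStack-style log lines are illustrative samples, not real output:

```python
import re
from collections import Counter

# Log-triage sketch: extract ERROR messages and rank them by frequency,
# so the most common failure signature is examined first.
LOG = """\
INFO  nova-compute started
ERROR neutron Timeout connecting to rabbitmq
ERROR neutron Timeout connecting to rabbitmq
ERROR cinder Volume attach failed: Timeout connecting to rabbitmq
INFO  keystone token issued
"""

def top_errors(log_text):
    # Capture the message after "ERROR <service>" on each line.
    errors = re.findall(r"^ERROR\s+\S+\s+(.*)$", log_text, re.MULTILINE)
    return Counter(errors).most_common()

if __name__ == "__main__":
    for message, count in top_errors(LOG):
        print(count, message)
```

Here the ranking alone points at the message bus: the timeout recurs across services, suggesting rabbitmq rather than neutron or cinder as the root cause.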

Read more
Nvizion Solutions

at Nvizion Solutions

1 recruiter
Anshita Abhilasha
Posted by Anshita Abhilasha
Remote only
3 - 6 yrs
₹6L - ₹15L / yr
DevOps
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Linux/Unix
JIRA
+3 more

Nvizion Solutions is looking for the position of Site Reliability Engineer.

 

If interested, kindly share your resume along with contact details.

 

 

Title: Site Reliability Engineer

No. of job openings: 2

Location: Gurgaon / Hyderabad / Bengaluru / Mumbai / Chennai (remote)

Remuneration: Best in the industry

 

 

·      Experience required: 2 to 4 years in the industry

·      Ensuring overall system reliability

·      Adding automation and alerting to the system

·      Providing troubleshooting support

·      Cross-team communication, working closely with the Product team and Customer Success team

·      Proactive support - ensuring the system returns to a healthy state

·      R&D on new tools/technologies to support the product and support teams

·      Good verbal/written communication to connect with the client

·      Good team player with a zeal to learn new technologies

·      The candidate will be part of the team responsible for 24x7 monitoring of a distributed global platform.

  • Linux Scripting
  • CI/CD knowledge (Jenkins / Bitbucket Pipelines / GitOps)
  • Version Control
  • Cloud platform knowledge (GCP/AWS/Azure/Digital Ocean)
  • Docker, Kubernetes

 

Read more
Young Pre Series A product start-up
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹22L / yr
CI/CD
skill iconPython
Bash
skill iconRuby
skill iconJenkins
+6 more

If this sounds like you, we’d love to speak with you.

Skills and Qualifications:

Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, TravisCI, or GitLab CI.

Proficiency in scripting languages such as Python, Bash, or Ruby.


Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform.

Experience with cloud platforms such as AWS, Azure, or GCP.


Knowledge of container orchestration tools such as Docker, Kubernetes, or OpenShift.

Experience with version control systems such as Git.


Familiarity with Agile methodologies and practices.


Understanding of networking concepts and principles.


Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL.


Good understanding of security and data protection principles.


Roles and responsibilities:

● Build and set up new development tools and infrastructure

● Work on ways to automate and improve development and release processes

● Deploy updates and fixes

● Help ensure information security best practices

● Provide Level 2 technical support

● Perform root cause analysis for production errors

● Investigate and resolve technical issues
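To illustrate the "automate development and release processes" responsibility above, a minimal CI/CD pipeline definition might look like the following GitLab CI sketch. The stage names, job names, and script paths are hypothetical, not taken from any actual project:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - ./scripts/build.sh        # hypothetical build script

test-job:
  stage: test
  script:
    - ./scripts/run-tests.sh    # hypothetical test runner

deploy-job:
  stage: deploy
  script:
    - ./scripts/deploy.sh       # hypothetical deploy script
  environment: production
  when: manual                  # gate production deploys behind a manual step
```

The same three-stage shape translates to Jenkins or TravisCI; the point is that deploys of "updates and fixes" run through the same automated, version-controlled path as every other change.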

Mumbai
2 - 4 yrs
₹4L - ₹8L / yr
Business development
Sales
Cloud Computing
Presales
Google Workspace
+1 more

Job Responsibilities:

  • Google BDM sells Google Cloud solutions: Google Workspace (SaaS), Google Cloud Platform (IaaS / PaaS), Apigee (API management), and Anthos
  • A key focus is the local market
  • You should expect interesting and innovative projects like Modern Workplace, Cloud Computing, API management, and application modernization platforms
  • An ideal candidate would combine the ability to run a full cycle of pre-sales activities and perform basic deployment (setting up MX records, mail routing configurations, and domain verification)
  • Being a BDM means driving business opportunities to meet KPIs, combining your own experience with presales/engineering resources locally and globally.
  • You will drive the pipeline with the Account Team, work on opportunities from Google, and run marketing activities like webinars and seminars. Target customers include both SMBs and enterprises.
  • You will report to the CEO of the company
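The "basic deployment" tasks mentioned above (MX records, mail routing, domain verification) typically come down to DNS changes like the following illustrative zone-file entries. The domain and verification token are placeholders; always confirm the current record values shown in the Google Admin console for the specific account:

```
; Route mail for example.com to Google Workspace (illustrative values)
example.com.  3600  IN  MX   1  smtp.google.com.

; Domain verification TXT record; <token> is issued per-domain by Google
example.com.  3600  IN  TXT  "google-site-verification=<token>"
```

Older Workspace domains may instead carry the legacy five-record ASPMX.L.GOOGLE.COM set; the Admin console indicates which set applies.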

What you'll do:

  • Selling cloud solutions from Google (IaaS / PaaS / SaaS)
  • Consulting on products for both customers and sales managers
  • Interacting with the vendor
  • Participating in presentations, webinars, and marketing events as a speaker
  • Getting certified on Google products
  • Preparing sales reports
  • Performing basic deployments

Profile requirements:

  • At least 3 years of experience in sales of corporate software or cloud services
  • Knowledge of public cloud providers' solutions (Google, Amazon, Microsoft, etc.)
  • Experience in selling IaaS / PaaS / SaaS services is an advantage
  • Clear written communication and the ability to conduct presentations
  • Strong self-organization and focus on results
  • Flexibility and diplomacy
  • Knowledge of cloud technologies
  • A desire to work in a team
  • Ability to multitask
  • English (at least intermediate, both written and verbal)
Navi Mumbai, Mumbai
1 - 4 yrs
₹2L - ₹3.6L / yr
Microsoft Windows
Linux/Unix
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+2 more

Hello,


Greetings for the day!


Tridat Technologies is hiring an "L1 Windows Server Administrator" for an advanced technology solutions company catering to the needs of the Banking, Mobility, Payments and Government sectors.


Qualifications: Any graduate


Experience: 2+ yrs


Roles & Responsibilities:

• Windows/Linux OS administration (certification is an added advantage)

• Troubleshooting knowledge

• Active Directory (AD) knowledge

• Cloud (AWS / Azure / GCP) knowledge (certification is an added advantage)

• Good communication skills

• Teamwork

• Familiarity with remote management

• 24x7 support

• Experience in virtualization (VMware & Hyper-V)

• Familiarity with ticketing and ITSM processes


Location: Rabale, Navi Mumbai


Working Hours: 24x7 rotational shifts


Employment Mode: Contract-to-hire (full-time opportunity)


Joining Period: Immediate to a maximum of 15 days





Thank You & Regards,

Shraddha Kamble

HR Recruiter

