Google Cloud Platform (GCP) Jobs in Hyderabad


Apply to 50+ Google Cloud Platform (GCP) Jobs in Hyderabad on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

Publicis Sapient

Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.


Role & Responsibilities:

Your role focuses on the design, development, and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation
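The ingestion responsibilities above usually start with one concrete problem: heterogeneous sources arrive in different shapes and must be normalized to a common schema before storage or analytics. A minimal, framework-free Python sketch of that step — all record shapes and field names here are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    source: str
    user_id: str
    amount: float
    ts: datetime

def from_csv_row(row: str) -> Event:
    # Batch source: "user_id,amount,epoch_seconds"
    uid, amount, epoch = row.split(",")
    return Event("csv", uid, float(amount),
                 datetime.fromtimestamp(int(epoch), tz=timezone.utc))

def from_json_msg(msg: dict) -> Event:
    # Streaming source: {"user": ..., "amt": ..., "time": ISO-8601}
    return Event("stream", msg["user"], float(msg["amt"]),
                 datetime.fromisoformat(msg["time"]))

def ingest(csv_rows, json_msgs):
    """Merge both feeds into one normalized, time-ordered list."""
    events = [from_csv_row(r) for r in csv_rows]
    events += [from_json_msg(m) for m in json_msgs]
    return sorted(events, key=lambda e: e.ts)
```

In a real pipeline the same normalization would run inside a Spark or NiFi job; the point is that batch and streaming feeds converge on one schema as early as possible.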

Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 5+ years of IT experience, with 3+ years in data-related technologies.

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP).

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery.

6. Well-versed, working knowledge of data-platform-related services on at least one cloud platform, including IAM and data security.


Preferred Experience and Knowledge (Good to Have):

1. Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres).

2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation.

3. Knowledge of distributed messaging frameworks (ActiveMQ / RabbitMQ / Solace), search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on the cloud, automated build and deployment pipelines, and code quality.

6. Cloud data specialty and other related Big Data technology certifications.


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient

Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will create technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role focuses on the design, development, and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:

1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies.

2. Minimum 1.5 years of experience in Big Data technologies.

3. Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred.

5. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery.


Preferred Experience and Knowledge (Good to Have):

1. Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres).

2. Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation.

3. Knowledge of distributed messaging frameworks (ActiveMQ / RabbitMQ / Solace), search and indexing, and microservices architectures.

4. Performance tuning and optimization of data pipelines.

5. CI/CD: infrastructure provisioning on the cloud, automated build and deployment pipelines, and code quality.

6. Working knowledge of data-platform-related services on at least one cloud platform, including IAM and data security.

7. Cloud data specialty and other related Big Data technology certifications.



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
Data Streaming

Data Engineering: Senior Engineer / Manager


As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the health of the overall solution.


Must-Have Skills:

1. GCP

2. Spark Streaming: live data streaming experience is desired.

3. Any one coding language: Java / Python / Scala



Skills & Experience:


- Overall experience of at least 5 years, with a minimum of 4 years of relevant experience in Big Data technologies

- Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery

- Well-versed, working knowledge of data-platform-related services on GCP

- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position


Your Impact:


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation
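Among the must-have skills, live data streaming is called out explicitly. The core aggregation Spark Structured Streaming performs — grouping events into fixed event-time windows — can be sketched without any framework. This toy version (illustrative only, not the Spark API) makes the windowing arithmetic concrete:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping
    event-time windows and count occurrences per key — the same
    aggregation Spark performs with window() + groupBy."""
    counts = defaultdict(int)
    for ts, key in events:
        # Align the timestamp down to the start of its window.
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real streaming engine adds watermarks for late data and incremental state, but the window assignment itself is exactly this modular arithmetic.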

Fintrac Global services
Hyderabad
5 - 8 yrs
₹5L - ₹15L / yr
Python
Bash
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure

Required Qualifications: 

• Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.

• 5+ years of experience in a DevOps role, preferably for a SaaS or software company.

• Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).

• Proficiency in scripting languages (e.g., Python, Bash, Ruby).

• Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).

• Extensive experience with NGINX and similar web servers.

• Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

• Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).

• Ability to work on-call as needed and respond to emergencies in a timely manner.

• Experience with high-transaction e-commerce platforms.
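Scripting proficiency in a DevOps role often comes down to small, dependable helpers. One staple is retrying a flaky operation with exponential backoff — a generic sketch, not tied to any tool named above:

```python
import time

def retry(op, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call op() until it succeeds or attempts are exhausted,
    doubling the delay between tries (1s, 2s, 4s, ...)."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** i))
```

Injecting the sleep function keeps the helper unit-testable without real delays.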


Preferred Qualifications: 

• Certifications in cloud computing or DevOps are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).

• Experience in a high-availability, 24x7x365 environment.

• Strong collaboration, communication, and interpersonal skills.

• Ability to work independently and as part of a team.

AmplifAI
Posted by Vijay Chavan
Hyderabad
8 - 12 yrs
₹15L - ₹25L / yr
HTML/CSS
JavaScript
Angular (2+)
AngularJS (1.x)
ASP.NET

Job Description

A technical lead responsible for development, managing the team(s), and monitoring tasks and sprints. They will also work with business analysts to gather new requirements and change requests, help resolve application issues, and assist developers when they are stuck.

Responsibilities

·       Design and develop applications based on the architecture provided by the solution architects.

·       Help team members and co-developers achieve their tasks.

·       Maintain and monitor new work items and support issues, and assign them to the respective developers.

·       Communicate with business analysts and solution architects regarding new requirements and change requests.

·       Resolve support tickets with the help of your team within service timelines.

·       Manage sprints to achieve targets.

Technical Skills

·       Microsoft .NET MVC

·       .NET Core 3.1 or greater

·       C#

·       Web API

·       Async Programming, Threading, and tasks

·       Test Driven Development 

·       Strong expertise in SQL (table design, programming, optimization)

·       Azure Functions

·       Azure Storage

·       MongoDB, NoSQL

Qualifications/Skills Desired:

·       Any Bachelor’s degree relevant to Computer Science. MBA or equivalent is a plus

·       Minimum of 8-10 years of IT experience, including managing teams, of which 4-5 years should be as a technical/team lead

·       Strong verbal and written communication skills, the ability to adapt to many different personalities, and conflict-resolution skills

·       Excellent organizational and time-management skills with strong attention to detail

·       Confidentiality with privacy-sensitive customer and employee documents

·       Strong work ethic: good attitude and judgment, discretion, and a high level of confidentiality

·       Previous experience with customer interactions

InnoMick Technology Pvt Ltd

Posted by Sravani Vadranam
Hyderabad
6 - 6 yrs
₹10L - ₹15L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB



Position: Technical Architect

Location: Hyderabad

 Experience: 6+ years


Job Summary:

We are looking for an experienced Technical Architect with a strong background in Python, Node.js, and React to lead the design and development of complex, scalable software solutions. The ideal candidate will possess exceptional technical skills, a deep understanding of software architecture principles, and a proven track record of successfully delivering high-quality projects. You should be capable of leading a cross-functional team responsible for the full software development life cycle, from conception to deployment, using Agile methodologies.

 

 

Responsibilities:

●       Lead the design, development, and deployment of software solutions, ensuring architectural integrity and high performance.

●       Collaborate with cross-functional teams, including developers, designers, and product managers, to define technical requirements and create effective solutions.

●       Provide technical guidance and mentorship to development teams, ensuring best practices and coding standards are followed.

●       Evaluate and recommend appropriate technologies, frameworks, and tools to achieve project goals.

●       Drive continuous improvement by staying updated with industry trends, emerging technologies, and best practices.

●       Conduct code reviews, identify areas of improvement, and promote a culture of excellence in software development.

●       Participate in architectural discussions, making strategic decisions and aligning technical solutions with business objectives.

●       Troubleshoot and resolve complex technical issues, ensuring optimal performance and reliability of software applications.

●       Collaborate with stakeholders to gather and analyze requirements, translating them into technical specifications.

●       Define and enforce architectural patterns, ensuring scalability, security, and maintainability of systems.

●       Lead efforts to refactor and optimize existing codebase, enhancing performance and maintainability.

Qualifications:

●       Bachelor's degree in Computer Science, Software Engineering, or a related field. Master's degree is a plus.

●       Minimum of 8 years of experience in software development with a focus on Python, Node.js, and React.

●       Proven experience as a Technical Architect, leading the design and development of complex software systems.

●       Strong expertise in software architecture principles, design patterns, and best practices.

●       Extensive hands-on experience with Python, Node.js, and React, including designing and implementing scalable applications.

●       Solid understanding of microservices architecture, RESTful APIs, and cloud technologies (AWS, GCP, or Azure).

●       Extensive knowledge of JavaScript, web stacks, libraries, and frameworks.

●       Ability to create automation test cases and unit test cases (optional)

●       Proficiency in database design, optimization, and data modeling.

●       Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).

●       Excellent problem-solving skills and the ability to troubleshoot complex technical issues.

●       Strong communication skills, both written and verbal, with the ability to effectively interact with cross-functional teams.

●       Prior experience in mentoring and coaching development teams.

●       Strong leadership qualities with a passion for technology innovation.

●       Experience with Linux-based development environments using GitHub and CI/CD

●       Familiarity with the Atlassian stack (JIRA/Confluence)
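Several of the qualifications above (architecture principles, design patterns, testability) come down to keeping business logic independent of storage. A minimal repository-pattern sketch in Python — class and method names are illustrative, not from any product mentioned:

```python
class InMemoryUserRepo:
    """Storage adapter; could be swapped for a MongoDB- or SQL-backed
    implementation without touching the service layer."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, email):
        self._users[user_id] = email

    def get(self, user_id):
        return self._users.get(user_id)

class UserService:
    """Business logic depends only on the repo's interface,
    so it can be unit-tested with the in-memory adapter."""
    def __init__(self, repo):
        self.repo = repo

    def register(self, user_id, email):
        if self.repo.get(user_id) is not None:
            raise ValueError("user already exists")
        self.repo.save(user_id, email)
        return user_id
```

The same separation is what lets an architect enforce scalability and maintainability: storage can change (or be mocked) without rewriting the rules.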






 



Codemonk

Posted by TA Codemonk
Hyderabad
4 - 8 yrs
₹5L - ₹18L / yr
DevOps
Docker
Kubernetes
Amazon Web Services (AWS)
Windows Azure

We are looking for a highly motivated Senior DevOps Engineer who can thrive in a fast-paced agile environment. You will be part of our scrum teams, contribute to the development of Java-based applications, and participate in our DevOps community of practice.

 

Technically, you are proficient in the principles behind CI/CD, immutable infrastructure, GitOps, infrastructure as code, and a best-in-class toolchain for a highly productive developer experience in a cloud-first environment.

 

The ideal candidate will be passionate about repeatable processes and automation, and will understand the challenging decisions involved in creating scale, resilience, security, and availability. The role requires coordination with Software Development, DevOps, SRE, and Architecture teams.

 

Responsibilities:

·      Create deployment automation plans for high-throughput and low response time applications.

·      Build automation for deployments on-prem and in the cloud

·      Implement Continuous Integration on Jenkins

·      Build out application deployment containers on AWS (Amazon Web Services) EKS (Elastic Kubernetes Service) and Red Hat OpenShift

·      Coordinate with software engineering, infrastructure, network teams and other DevOps engineers

·      Build tools for testing, automation and monitoring that can improve predictability and reliability of deployments

 

Qualifications:

·      Master’s or bachelor’s degree in CS or Engineering

·      4+ years of experience working in Infrastructure roles

·      2+ years of experience with DevOps

·      Strong experience with containers (Docker, Kubernetes, and Helm)

·      Fluency in Python or other programming or scripting language

·      Strong background in networking and server management on Red Hat Linux

·      Strong knowledge of the DevOps toolchain on the AWS Linux platform: Jenkins, Groovy, Nexus OSS, Python/Java, Ansible, CodePipeline, Confluence, Git, CloudFormation, etc., as well as infrastructure-as-code frameworks such as Terraform and Ansible

·      Experience using APM tools like Splunk, Dynatrace

·      Experience with automated testing tools such as Selenium, Cucumber or Server Spec

·      Experience with automated load testing tools (JMeter)

·      Experience deploying automation solutions in a public cloud environment such as AWS

·      Operationally savvy, experience with monitoring, alerting, and analyzing system metrics to identify problems and understand system behavior

·      Ability to work in a fast-paced environment

·      Experience with Agile software development methodology

·      Effective communication and collaboration skills

·      Strong problem-solving skills

·      A passion for innovation

·      Collaborate, drive open communication, and reach across functional borders
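GitOps, immutable infrastructure, and infra-as-code, all named above, share one mechanism: a loop that diffs declared desired state against actual state and emits corrective actions. A toy reconciliation function in Python (schematic, not any real tool's API):

```python
def reconcile(desired: dict, actual: dict):
    """Compute the create/update/delete actions that move `actual`
    (resource name -> config) to match `desired` — the core loop
    behind Terraform plans and GitOps controllers."""
    actions = []
    for name, cfg in desired.items():
        if name not in actual:
            actions.append(("create", name, cfg))
        elif actual[name] != cfg:
            actions.append(("update", name, cfg))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions
```

Real tools add dependency ordering and drift detection, but "declared state minus observed state equals plan" is the idea interviewers usually probe.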

codersbrain

Posted by Tanuj Uppal
Hyderabad, Pune, Noida, Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
Go Programming (Golang)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure

Golang Developer

Location: Chennai/ Hyderabad/Pune/Noida/Bangalore

Experience: 4+ years

Notice Period: Immediate/ 15 days

Job Description:

  • Must have at least 3 years of experience working with Golang.
  • Strong Cloud experience is required for day-to-day work.
  • Experience with the Go programming language is necessary.
  • Good communication skills are a plus.
  • Skills: AWS, GCP, Azure, Golang
Electrum

Posted by Nandini Matla
Hyderabad
4 - 9 yrs
₹4L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure

Electrum is looking for an experienced and proficient DevOps Engineer. This role offers the opportunity to explore what's possible in a collaborative and innovative work environment. If your goal is to work with a team of talented professionals keenly focused on solving complex business problems and supporting product innovation with technology, you might be our new DevOps Engineer. In this position, you will help build out systems for our rapidly expanding team, enabling the whole engineering group to operate more effectively and iterate at top speed in an open, collaborative environment. The ideal candidate has a solid background in software engineering; hands-on experience deploying product updates, identifying production issues, and implementing integrations; proven capability and willingness to take on challenges; a strong belief in efficiency and innovation; and exceptional communication and documentation skills.

YOU WILL:

  • Plan for future infrastructure as well as maintain & optimize the existing infrastructure. 
  • Conceptualize, architect, and build:
  • 1. Automated deployment pipelines in a CI/CD environment like Jenkins;
  • 2. Infrastructure using Docker, Kubernetes, and other serverless platforms; 
  • 3. Secured network utilizing VPCs with inputs from the security team. 
  • Work with developers & the QA team to institute a policy of continuous integration with automated testing.
  • Architect, build, and manage dashboards to provide visibility into delivery and into production application functional and performance status.
  • Work with developers to institute systems, policies, and workflows which allow for a rollback of deployments. 
  • Triage releases of applications/hotfixes to the production environment on a daily basis.
  • Interface with developers and triage SQL queries that need to be executed in production environments.
  • Maintain 24/7 on-call rotation to respond and support troubleshooting of issues in production.
  • Assist the developers and on calls for other teams with a postmortem, follow up and review of issues affecting production availability.
  • Scale the Electrum platform to handle millions of requests concurrently.
  • Reduce Mean Time To Recovery (MTTR), enable High Availability and Disaster Recovery
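The rollback and MTTR bullets above imply keeping enough deployment state to return to a known-good version quickly. A toy sketch of that bookkeeping (purely illustrative; a real system would track this in the deployment tooling itself):

```python
class DeploymentHistory:
    """Track released versions so a bad deploy can be rolled back
    to the previous known-good one — one small piece of the
    rollback workflows described above."""
    def __init__(self):
        self._versions = []

    def deploy(self, version):
        self._versions.append(version)
        return version

    def current(self):
        return self._versions[-1] if self._versions else None

    def rollback(self):
        if len(self._versions) < 2:
            raise RuntimeError("no previous version to roll back to")
        self._versions.pop()  # discard the bad release
        return self._versions[-1]
```

Keeping rollback a constant-time, pre-planned operation (rather than an ad-hoc fix under pressure) is what actually drives MTTR down.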

PREREQUISITES:

  • Bachelor’s degree in engineering, computer science, or related field, or equivalent work experience.
  • Minimum of six years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other services provided by AWS.
  • At least 2 years of experience in building and owning serverless infrastructure.
  • At least 2 years of scripting experience in Python (preferred) and Shell, with web application deployment systems and continuous integration tools (Ansible).
  • Experience building a multi-region highly available auto-scaling infrastructure that optimizes performance and cost.
  • Experience in automating the provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
  • Must have prior experience automating deployments to production and lower environments.
  • Worked on providing solutions for major automation with scripts or infrastructure.
  • Experience with APM tools such as DataDog and log management tools.
  • Experience in designing and implementing Essential Functions System Architecture Process; establishing and enforcing Network Security Policy (AWS VPC, Security Group) & ACLs.
  • Experience establishing and enforcing:
  • 1. System monitoring tools and standards
  • 2. Risk Assessment policies and standards
  • 3. Escalation policies and standards
  • Excellent DevOps engineering, team management, and collaboration skills.
  • Advanced knowledge of programming languages such as Python and writing code and scripts.
  • Experience or knowledge in - Application Performance Monitoring (APM), and prior experience as an open-source contributor will be preferred.


Digitalshakha
Posted by Saurabh Deshmukh
Bengaluru (Bangalore), Mumbai, Hyderabad
1 - 5 yrs
₹2L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure

Main tasks

  • Supervise the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in cloud and container environments
  • Take responsibility for the operations side of a DevOps organization, especially for development with container technology and orchestration, e.g. Kubernetes
  • Install, operate, and monitor web applications in cloud data centers, both for development and test purposes and for the operation of our own production cloud
  • Implement installations of the solution, especially in the container context
  • Introduce, maintain, and improve installation solutions for development in desktop and server environments as well as in the cloud and with on-premises Kubernetes
  • Maintain the system installation documentation and conduct trainings

Execution of internal software tests and support of involved teams and stakeholders

  • Hands-on experience with Azure DevOps.

Qualification profile

  • Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
  • Experience in software
  • Installation and administration of Linux and Windows systems including network and firewalling aspects
  • Experience with build and deployment automation with tools like Jenkins, Gradle, Argo, AnangoDB or similar, as well as system scripting (Bash, PowerShell, etc.)
  • Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
  • Server environments, especially application, web-and database servers
  • Knowledge of VMware/K3D/Rancher is an advantage
  • Good spoken and written knowledge of English


Zobaze technologies

Posted by Karthik Sutrave
Hyderabad
1 - 5 yrs
₹8L - ₹14L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB

About

Zobaze builds shop management tools for the common shopkeeper and aims to build an entire retail and restaurant management infrastructure for the world.


As a full-stack engineer, you will work with the founders, support engineers, and mobile app engineers. We are a small team, but we are a hardworking and passionate bunch, motivated by collaboration, strong results and the impact Zobaze makes on our customers.

 

Responsibilities:

  • Develop and maintain features for our backend APIs, web office editor, and online ecommerce platform.
  • Build new product lines (e.g. internal tools for analytics/management, import/export utilities, databases)
  • Help assess and recruit future engineers.

 

Requirements:

  • Zobaze is primarily built in JavaScript, with NodeJS on the backend and Vue on the frontend, and MongoDB and Firebase as the key databases.
  • We use a myriad of other tools, which we expect you to learn when required. Being a good learner is a must.
  • 2+ years of experience in professional software development, startup experience is ideal
  • Experience owning technically challenging and demanding projects.
  • Experience across the entire stack, from backend to frontend.
  • Knowledge in any of Vue/React/Angular 2+
  • Experience in any modern cloud infra (AWS/GCP/Azure)
  • A customer-first mindset, to ensure what we build meets what customers need
  • Introspective nature and result oriented attitude


NowFloats Technologies Pvt Ltd
Posted by Ashish Kumar
Hyderabad
6 - 15 yrs
₹5L - ₹30L / yr
.NET
ASP.NET
C#
Data Structures
Amazon Web Services (AWS)

Looking for a technical lead in .NET with good experience in the .NET domain, cloud platforms, and data structures and algorithms.


Looking only for immediate joiners in the Hyderabad region.

Bourntec Solutions Inc
Posted by Mohammed Afzal
Hyderabad
10 - 19 yrs
₹20L - ₹50L / yr
Docker
Kubernetes
DevOps
Windows Azure
Google Cloud Platform (GCP)

Key Skills Required for Lead DevOps Engineer

Containerization Technologies: Docker, Kubernetes, OpenShift

Cloud Technologies: AWS/Azure, GCP

CI/CD Pipeline Tools: Jenkins, Azure DevOps

Configuration Management Tools: Ansible, Chef

SCM Tools: Git, GitHub, Bitbucket

Monitoring Tools: New Relic, Nagios, Prometheus

Cloud Infra Automation: Terraform

Scripting Languages: Python, Shell, Groovy

 

·       Ability to decide the architecture and tools for the project as per availability

·       Sound knowledge of deployment strategies and the ability to define timelines

·       Team-handling skills are a must

·       Debugging skills are an advantage

·       Good to have knowledge of databases like MySQL, PostgreSQL

·       It is advantageous to be familiar with Kafka and RabbitMQ

·       Good to have knowledge of web servers to deploy web applications

·       Good to have knowledge of code-quality checking tools like SonarQube and vulnerability scanning

·       Prior experience in DevSecOps is an advantage

Note: Tools mentioned in bold are a must and others are added advantage

F5 Networks
Posted by Gopi Daggumilli
Hyderabad
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
OpenStack
OpenShift

POSITION SUMMARY:

We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role in our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and the customized/flexible testing environments used by the Test and Product Development teams. As an Infra Engineer, you'll have the opportunity to work with cutting-edge technology and talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP), OpenStack, storage, backup, VMware, KVM, Xen, and Hyper-V hypervisor server administration, as well as networking and automation, in a data center operations environment at global enterprise scale with Kubernetes and OpenShift container platforms.

                                                                                                

EXPERIENCE

7-9+ years – Software Engineer III

 

PRIMARY RESPONSIBILITIES:

  • Drive the design, project build, infrastructure setup, monitoring, measurement, and improvement of the quality of services provided, including network and virtual-instance services from OpenStack, VMware VIO, public and private clouds, and DevOps environments.

  • Work closely with customers to understand their requirements and deliver on the agreed timelines.

  • Work closely with F5 architects and vendors to understand emerging technologies and F5 Product Roadmap and how they would benefit the Infra team and its users.

  • Work closely with the team and complete deliverables on time.

  • Consult with testers, application, and service owners to design scalable, supportable network infrastructure to meet usage requirements.

  • Assume ownership for large/complex systems projects; mentor Lab Network Engineers in the best practices for ongoing maintenance and scaling of large/complex systems.

  • Drive automation efforts for the configuration and maintainability of the public/private Cloud.  

  • Lead product selection for replacement or new technologies

  • Address user tickets in a timely manner for the covered services

  • Responsible for deploying, managing, and supporting production and pre-production environments for our core systems and services.

  • Migration and consolidations of infrastructure

  • Design and implement major service and infrastructure components.

  • Research, investigate and define new areas of technology to enhance existing service or new service directions.

  • Evaluate performance of services and infrastructure; tune, re-evaluate the design and implementation of current source code and system configuration.

  • Create and maintain scripts and tools to automate the configuration, usability and troubleshooting of the supported applications and services.

  • Ability to take ownership of activities and new initiatives.

  • Provide global infra support from India to product development teams.

  • On-call support on a rotational basis across global time zones.

  • Vendor management for the latest hardware and software evaluations, keeping systems up to date.

 

KNOWLEDGE, SKILLS AND ABILITIES:

  • Have an in-depth multi-disciplined knowledge of Storage, Compute, Network, DevOps technologies and latest cutting-edge technologies.

  • Multi-cloud - AWS, Azure, GCP, OpenStack, DevOps Operations

  • IaaS- Infrastructure as a service, Metal as service, Platform service

  • Storage – Dell EMC, NetApp, Hitachi, Qumulo and Other storage technologies

  • Hypervisors – (VMware, Hyper-V, KVM, Xen and AHV)

  • DevOps – Kubernetes, OpenShift, Docker, and other container and orchestration platforms

  • Automation – scripting experience in Python/Shell/Golang, full-stack development, and application deployment

  • Tools – Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration.

  • Datacenter Operations – Racking, stacking, cable matrix, Solution Design and Solutions Architect 

  • Networking skills – Cisco/Arista switches and routers; experience with cable-matrix design and pathing (fiber/copper)

  • Experience in SAN/NAS storage – (EMC/Qumulo/NetApp & others)

  • Experience with Red Hat Ceph storage.

  • A working knowledge of Linux, Windows, and Hypervisor Operating Systems and virtual machine technologies

  • SME - subject matter expert for all cutting-edge technologies

  • Data-center architect professional & storage expert-level certified professional experience.

  • A solid understanding of high availability systems, redundant networking and multipathing solutions

  • Proven problem resolution related to network infrastructure, judgment, negotiating and decision-making skills along with excellent written and oral communication skills.

  • A Working experience in Object – Block – File storage Technologies

  • Experience in Backup Technologies and backup administration.

  • Dell/HP/Cisco UCS server administration is an additional advantage.

  • Ability to quickly learn and adopt new technologies.

  • Very strong experience with and exposure to open-source platforms.

  • Working experience with monitoring tools such as Zabbix, Nagios, Datadog, etc.

  • Working experience with bare-metal services and OS administration.

  • Working experience with cloud connectivity, e.g. AWS IPsec, Azure ExpressRoute, GCP VPN tunnels, etc.

  • Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.)

  • Working experience with systems engineering and Linux/Unix administration

  • Working experience with database administration: PostgreSQL, MySQL, NoSQL

  • Working experience with automation/configuration management using Puppet, Chef, or an equivalent

  • Working experience with DevOps operations: Kubernetes, containers, Docker, and Git repositories

  • Experience in Build system process and Code-inspect and delivery methodologies.

  • Knowledge on creating Operational Dashboards and execution lane.

  • Experience and knowledge on DNS, DHCP, LDAP, AD, Domain-controller services and PXE Services

  • SRE experience: responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.

  • Vendor support – OEM upgrades, coordinating technical support and troubleshooting experience.

  • Experience in handling On-call Support and hierarchy process.

  • Knowledge on scale-out and scale-in architecture.

  • Working experience in ITSM / process Management tools like ServiceNow, Jira, Jira Align.

  • Knowledge on Agile and Scrum principles

  • Working experience with ServiceNow

  • Knowledge sharing, transition experience, and self-learning behaviors.

mavQ

at mavQ

6 recruiters
Harish Kumar Burukunta
Posted by Harish Kumar Burukunta
Hyderabad
7 - 12 yrs
₹10L - ₹15L / yr
Spring Boot
Java
Angular (2+)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+8 more

mavQ is seeking a motivated Lead Full-Stack Developer to join our team. You will be an integral member of the Professional Services team dedicated to working on the recently acquired mavQ Electronic Research Administration products, built on a REST-based Java platform running in Tomcat. The work is about 70% implementation and new development and 30% maintenance.


Skills Required:

  • At least 7 years of experience with Frontend and Backend Technologies.
  • Experience in Java, Spring, API Integration, Angular, Frontend Development.
  • Ability to work with caching systems such as Redis.
  • Good understanding of cloud computing & Distributed systems.
  • Experience in people management.
  • Capability of working within a budget of hours and completing projects by a deadline to appropriate quality standards
  • Communicate clearly and ask clarification questions, dig deeper.
  • Ability to work from visual and functional specifications
  • Work with a positive attitude even when circumstances may be unfavorable
  • Understand RESTful APIs, MVC concepts, and how to effectively use version control systems
  • Ability to work effectively in a team of developers and representatives of other functional groups, such as design
  • Good experience in CI/CD, Kubernetes, test suites, Docker.
  • Good Understanding of the Command line.
  • Good with client Interaction.

 

Roles & Responsibilities:

  • Effective Problem Solving Skills. Assist the team with debugging & guidance when needed.
  • Responsible for managing a team of 8-10 developers.
  • Lead the product’s technical domain with full authority.
  • Assist the product & project management teams with setting timelines & priorities for feature development.
  • Drive good coding standards & development practices in the team. 
  • Own the code review process for the team & ensure high quality work.
  • Assist with system & architecture design & research into new technologies.
  • Be involved with client communications when needed from a product sales perspective. 

 

What we offer:

  • Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
    • Sum Insured: INR 5,00,000/- 
    • Maternity cover up to two children
    • Inclusive of COVID-19 Coverage
    • Cashless & Reimbursement facility
    • Access to free online doctor consultation

  • Personal Accident Policy (Disability Insurance) -
  • Sum Insured: INR. 25,00,000/- Per Employee
  • Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
  • Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
  • Temporary Total Disability is covered

  • An option of Paytm Food Wallet (up to Rs. 2,500) as a tax-saver benefit
  • Monthly internet reimbursement of up to Rs. 1,000
  • Opportunity to pursue Executive Programs/ courses at top universities globally
  • Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others

***

Tech Vedika

at Tech Vedika

1 recruiter
Latha B
Posted by Latha B
Hyderabad
4 - 7 yrs
₹4L - ₹13L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Description

  • Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads
  • Will be client point of contact for High Priority technical issues and new requirements
  • Should act as tech lead and guide and mentor junior members of the team
  • Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
  • Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
  • Build, Deploy and Manage production workloads including applications on EC2 instance, APIs on Lambda Functions and more
  • Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security

Qualifications

  • At least 5+ years of IT experience implementing enterprise applications preferred
  • Should be AWS Solution Architect Associate Certified
  • Must have at least 3+ years of working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route53, WAF, API Gateway, Elastic beanstalk, ECS, ECR, Lambda, SQS, SNS, S3 bucket, Elastic Search, DocumentDB IAM, etc.
  • Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
  • Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
  • Must have thorough experience with on-prem to AWS cloud workload migration
  • Should be comfortable in using AWS and other migrations tools
  • Should have experience working on AWS performance, cost, and security optimisation
  • Should have experience implementing automated patching and hardening of systems
  • Should be involved in P1 tickets and guide the team wherever needed
  • Creating backups and managing disaster recovery
  • Experience in infrastructure-as-code automation using scripts & tools like CloudFormation and Terraform
  • Any exposure to creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
  • Experience with Docker, Bitbucket, ELK and deploying applications on AWS
  • Good understanding of containerisation technologies like Docker, Kubernetes etc.
  • Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
  • Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
CloudAngle
Deepika DeepthiKollu
Posted by Deepika DeepthiKollu
Hyderabad
7 - 15 yrs
₹15L - ₹20L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more
  • Provision Dev Test Prod Infrastructure as code using IaC (Infrastructure as Code)
  • Good knowledge on Terraform
  • In-depth knowledge of security and IAM / Role Based Access Controls in Azure, management of Azure Application/Network Security Groups, Azure Policy, and Azure Management Groups and Subscriptions.
  • Experience with Azure and GCP compute, storage and networking (we can also look for GCP )
  • Experience in working with ADLS Gen2, Databricks and Synapse Workspace
  • Experience supporting cloud development pipelines using Git, CI/CD tooling, Terraform and other Infrastructure as Code tooling as appropriate
  • Configuration Management (e.g. Jenkins, Ansible, Git, etc...)
  • General automation including Azure CLI, or Python, PowerShell and Bash scripting
  • Experience with Continuous Integration/Continuous Delivery models
  • Knowledge of and experience in resolving configuration issues
  • Understanding of software and infrastructure architecture
  • Experience in Paas, Terraform and AKS
  • Monitoring, alerting and logging tools, and build/release processes Understanding of computing technologies across Windows and Linux
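As a small illustration of the Terraform and scripting skills listed above: Terraform also accepts JSON configuration (`*.tf.json`), so automation scripts can render configuration programmatically. The resource-group name and location below are hypothetical values for the sketch:

```python
import json

def render_tf_json(name, location):
    # Terraform reads *.tf.json files whose structure mirrors the HCL schema;
    # here we emit a single (hypothetical) Azure resource group.
    config = {
        "resource": {
            "azurerm_resource_group": {
                "main": {
                    "name": name,
                    "location": location,
                }
            }
        }
    }
    return json.dumps(config, indent=2)
```

Writing the returned string to a `main.tf.json` file alongside provider configuration would let `terraform plan` consume it like any hand-written HCL.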
Hyderabad
3 - 8 yrs
₹8L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Experienced with Azure DevOps, CI/CD and Jenkins.

Experience is needed in Kubernetes (AKS), Ansible, Terraform, Docker.

Good understanding in Azure Networking, Azure Application Gateway, and other Azure components.

Experienced Azure DevOps Engineer ready for a Senior role or already at a Senior level.

Demonstrable experience with the following technologies:

Microsoft Azure Platform As A Service (PaaS) product such as Azure SQL, AppServices, Logic Apps, Functions and other Serverless services.

Understanding of Microsoft Identity and Access Management products such including Azure AD or AD B2C.

Microsoft Azure Operational and Monitoring tools, including Azure Monitor, App Insights and Log Analytics.

Knowledge of PowerShell, GitHub, ARM templates, version controls/hotfix strategy and deployment automation.

Ability and desire to quickly pick up new technologies, languages, and tools

Excellent communication skills and Good team player.

Passionate about code quality and best practices is an absolute must

Must show evidence of your passion for technology and continuous learning

 

Remote, Pune, Mumbai, Bengaluru (Bangalore), Gurugram, Hyderabad
15 - 25 yrs
₹35L - ₹55L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
Architecture
Python
+5 more
  • 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
  • 15+ years of experience as a technical specialist in Customer-facing roles.
  • Ability to travel to client locations as needed (25-50%)
  • Extensive experience architecting, designing and programming applications in an AWS Cloud environment
  • Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
  • Agile software development expert
  • Experience with continuous integration tools (e.g. Jenkins)
  • Hands-on familiarity with CloudFormation
  • Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
  • Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
  • Strong practical application development experience on Linux and Windows-based systems
  • Extra curricula software development passion (e.g. active open source contributor)
Atmecs Ltd
Agency job
via Dangi Digital Media LLP by jaibir dangi
Hyderabad, Bengaluru (Bangalore), Coimbatore
4 - 8 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
Must have proficient experience of a minimum of 4 years in DevOps, with at least one end-to-end DevOps project implementation.
Strong expertise in DevOps concepts like Continuous Integration (CI), Continuous Delivery (CD), Infrastructure as Code, and cloud deployments.
Minimum experience of 2.5-3 years in configuration, development, and deployment with the underlying technologies, including Docker/Kubernetes and Prometheus.
Should have implemented an end-to-end DevOps pipeline using Jenkins or a similar framework.
Experience with microservices architecture.
Should have sound knowledge of branching and merging strategies.
Experience working with cloud computing technologies like Oracle Cloud (preferred)/GCP/AWS/OpenStack
Strong experience in AWS/Azure/GCP/OpenStack, deployment processes, and Dockerization.
Good experience in release management tools like JIRA or similar tools.
Good to have knowledge of infra automation tools Terraform/Chef/Ansible (preferred)
Experience in test automation tools like Selenium/Cucumber/Postman
Good communication skills to present DevOps solutions to the client and drive the implementation.
Experience in creating and managing custom operational and monitoring scripts.
Good knowledge of source control tools like Subversion, Git, Bitbucket, ClearCase.
Experience in system architecture design
Looking for candidates -Java - J2EEE lead
Hyderabad
10 - 12 yrs
₹25L - ₹30L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Spring
+1 more

Job Description

Who are we looking for?

A senior-level Java/J2EE lead to manage a critical project for one of the biggest clients in the banking domain. The individual should be passionate about technology and experienced in developing and managing cutting-edge technology applications.
We are looking for people from a trading background.

Technical Skills: 

  • An excellent tech lead or application architect with strong experience in monolithic Java legacy applications and modern cloud-native applications
  • Strong hands-on experience in Spring and Core Java, specifically multi-threading, concurrency, and memory management, and a fair understanding of network communication & protocols. 
  • Experience working in software development on low-latency and high-performing systems
  • Experienced in guiding & mentoring offshore team members and validating application deliverables on a regular basis
  • Ability to work in a collaborative manner with peers across different time zones. 
  • Passionate about good design and code quality, with strong engineering practices
  • Experience working on GCP will be preferred. 

Process Skills: 

  • Experience in analyzing requirements and develop software as per project defined software process
  • Develop and review design, code
  • Develop and document architecture framework, technical standards, and application roadmap
  • Guide development teams to comply with the architecture and development standards and ensure quality application is designed, developed, and delivered
  • Must have excellent communication skills.

Behavioral Skills:

  • Resolve technical issues of projects and Explore alternate designs
  • Effectively collaborates and communicates with the stakeholders and ensure client satisfaction
  • Mentor, Train and coach members of project groups to ensure effective knowledge management activity.

Certification:

  • Sun (Oracle) Java and GCP certified 
Publicis Sapient

at Publicis Sapient

10 recruiters
Pooja Singh
Posted by Pooja Singh
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Hyderabad, Pune
4 - 19 yrs
₹1L - ₹15L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Microservices
+7 more
  • Experience building large scale, large volume services & distributed apps., taking them through production and post-production life cycles
  • Experience in Programming Language: Java 8, Javascript
  • Experience in Microservice Development or Architecture
  • Experience with Web Application Frameworks: Spring or Springboot or Micronaut
  • Designing: High Level/Low-Level Design
  • Development Experience: Agile/ Scrum, TDD(Test Driven Development)or BDD (Behaviour Driven Development) Plus Unit Testing
  • Infrastructure Experience: DevOps, CI/CD Pipeline, Docker/ Kubernetes/Jenkins, and Cloud platforms like – AWS, AZURE, GCP, etc
  • Experience on one or more Database: RDBMS or NoSQL
  • Experience on one or more Messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
  • Security (Authentication, scalability, performance monitoring)
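A toy sketch of the publish/subscribe pattern behind the messaging platforms listed above (JMS/RabbitMQ/Kafka); this in-process class is illustrative only and omits everything a real broker provides (persistence, acknowledgements, partitions, delivery guarantees):

```python
from collections import defaultdict

class MiniBroker:
    """Minimal in-process pub/sub broker, for illustration only."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callable to receive every message on `topic`.
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        # Fan the message out synchronously to all subscribers of `topic`.
        for handler in self._subs[topic]:
            handler(message)
```

Production messaging work replaces the synchronous fan-out with durable queues or partitioned logs, but the topic/subscriber contract stays the same.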
Quark Software

at Quark Software

2 recruiters
Tarun M
Posted by Tarun M
Hyderabad
3 - 4 yrs
₹3L - ₹5L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Roles & Responsibilities
Implementing various development, testing, automation tools, and IT infrastructure
Planning the team structure, activities, and involvement in project management activities.
Managing stakeholders and external interfaces
Setting up tools and required infrastructure
Defining and setting development, test, release, update, and support processes for DevOps operation
Have the technical skill to review, verify, and validate the software code developed in the project.
Troubleshooting techniques and fixing the code bugs
Monitoring the processes during the entire lifecycle for its adherence and updating or creating new processes for improvement and minimizing the wastage
Encouraging and building automated processes wherever possible
Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
Incidence management and root cause analysis
Coordination and communication within the team and with customers
Selecting and deploying appropriate CI/CD tools
Strive for continuous improvement and build continuous integration, continuous development, and constant deployment pipeline (CI/CD Pipeline)
Mentoring and guiding the team members
Monitoring and measuring customer experience and KPIs
Managing periodic reporting on the progress to the management and the customer
Product Based Company
Agency job
via Jobdost by Sathish Kumar
Remote, Hyderabad
2 - 5 yrs
₹6L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more
Responsibilities Include
• Support software build and release efforts:
• Create, set up, and maintain builds
• Review build results and resolve build problems
• Create and Maintain build servers
• Plan, manage, and control product releases
• Validate, archive, and escrow product releases
• Maintain and administer configuration management tools, including source control, defect management, project management, and other systems.
• Develop scripts and programs to automate process and integrate tools.
• Resolve help desk requests from worldwide product development staff.
• Participate in team and process improvement projects.
• Interact with product development teams to plan and implement tool and build improvements.
• Perform other duties as assigned.

While the job description describes what is anticipated as the requirements of the position, the job requirements are subject to change based upon any changing needs and requirements of the business.

Required Skills
• TFS 2017 vNext Builds or AzureDevOps Builds Process
• Must have PowerShell 3.0+ scripting knowledge
• Exposure to build tools like MSBuild, NAnt, Xcode.
• Exposure to creating and maintaining vCenter/VMware vSphere 6.5
• Hands-on experience with Windows Server 2012 and above, and basic knowledge of macOS
• Good to have Shell or Batch scripting (optional)

Required Experience

Candidates for this position should hold the following qualifications to be considered as a suitable applicant. Please note that except where specified as “preferred,” or as a “plus,” all points listed below are considered minimum requirements.
• Bachelor's degree in a related discipline is strongly preferred
• 3 or more years of experience with Software Configuration Management tools, concepts, and processes.
• Exposure to Source control systems such as TFS, GIT, or Subversion (Optional)
• Familiarity with object-oriented concepts and programming in C# and Power Shell Scripting.
• Experience working on AzureDevOps Builds or vNext Builds or Jenkins Builds
• Experience working with developers to resolve development issues related to source control systems.
Chennai, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai
9 - 16 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Microsoft Windows Azure
+9 more

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.

  • Role Overview
    • Senior engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
  • Key Knowledge
    • 3-5+ years of experience in AWS/GCP or Azure technologies
    • Is likely certified on one or more of the major cloud platforms
    • Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
    • Ability to guide and lead internal agile teams on cloud technology
    • Background from the financial services industry or similar critical operational experience
Remote, Bengaluru (Bangalore), Chennai, Pune, Hyderabad, Mumbai
3 - 10 yrs
₹8L - ₹28L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

Experience: 3+ years of experience in Cloud Architecture

About Company:

The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.



Cloud Architect / Lead

  • Role Overview
    • Senior engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
  • Key Knowledge
    • 3-5+ years of experience in AWS/GCP or Azure technologies
    • Is likely certified on one or more of the major cloud platforms
    • Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
    • Ability to guide and lead internal agile teams on cloud technology
    • Background from the financial services industry or similar critical operational experience
 
Hyderabad
6 - 10 yrs
₹15L - ₹24L / yr
Python
Django
Flask
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+4 more
We are a tech venture which provides Product Engineering, QA Automation, Infrastructure, Data, and Market Research services.

Technical Proficiency : 
Must have

  • Strong development experience in Python in Unix/Linux/Ubuntu environments
  • Strong practical knowledge of Python and its libraries.
  • Current working experience with cloud deployment of AWS/Azure/GCP, Microservice architecture, and Docker in Python.
  • Good knowledge of CI/CD and DevOps practices
  • Good Experience of Python with Django/ Scrapy/ Flask frameworks.
  • Good Experience in Jupyter/ Docker/ Elastic Search, etc.
  • Solid understanding of software development principles and best practices.
  • Strong analytical thinking and problem-solving skills.
  • Proven ability to drive large-scale projects with a deep understanding of Agile SDLC, high collaboration, and leadership.

Good to have : 
  • Expected to have migration experience from one version to the other, as this project is about migration to the latest version.
  • Experience with the OpenEdX platform or any LMS platform is preferred. 
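A minimal, stdlib-only sketch of the web-service plumbing behind the Python frameworks listed above; Django and Flask both ultimately expose a WSGI callable like this one to their web servers:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Minimal WSGI application: a framework-agnostic callable that takes
    # the request environ and a start_response callback, and returns the
    # response body as an iterable of bytes.
    body = b"hello"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response("200 OK", headers)
    return [body]
```

In practice a framework's routing, templating, and middleware sit between this interface and application code, but understanding the raw callable helps when debugging deployments behind gunicorn or uWSGI.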
Hyderabad
3 - 10 yrs
₹8L - ₹20L / yr
Python
Django
Flask
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+2 more

We are a tech venture which provides Product Engineering, QA Automation, Infrastructure, Data, and Market Research services.

Technical Proficiency : 
Must have : 

  • Strong development experience in Python in Unix/Linux/Ubuntu environments

  • Strong practical knowledge of Python and its libraries.

  • Current working experience with cloud deployment of AWS/Azure/GCP, Microservice architecture, and Docker in Python.

  • Good knowledge of CI/CD and DevOps practices

  • Good Experience of Python with Django/ Scrapy/ Flask frameworks.

  • Good Experience in Jupyter/ Docker/ Elastic Search, etc.

  • Solid understanding of software development principles and best practices.

  • Strong analytical thinking and problem-solving skills.

  • Proven ability to drive large-scale projects with a deep understanding of Agile SDLC, high collaboration, and leadership.

    Good to have : 

  • Expected to have migration experience from one version to the other, as this project is about migration to the latest version.

  • Experience with the OpenEdX platform or any LMS platform is preferred. 

     

DataMetica

at DataMetica

7 recruiters
Sayali Kachi
Posted by Sayali Kachi
Pune, Hyderabad
2 - 6 yrs
₹3L - ₹15L / yr
Google Cloud Platform (GCP)
SQL
BQ

Datametica is looking for talented Big Query engineers

 

Total Experience - 2+ yrs.

Notice Period – 0 - 30 days

Work Location – Pune, Hyderabad

 

Job Description:

  • Sound understanding of Google Cloud Platform; should have worked on BigQuery, Workflow, or Composer
  • Experience in migration to GCP and integration projects in large-scale environments; ETL technical design, development, and support
  • Good SQL skills and Unix scripting; programming experience with Python, Java, or Spark would be desirable.
  • Experience in SOA and services-based data solutions would be advantageous

 

About the Company: 

www.datametica.com

Datametica is among the world's leading Cloud and Big Data analytics companies.

Datametica was founded in 2013 and has grown at an accelerated pace within a short span of 8 years. We are providing a broad and capable set of services that encompass a vision of success, driven by innovation and value addition that helps organizations in making strategic decisions influencing business growth.

Datametica is the global leader in migrating legacy data warehouses to the cloud. Datametica moves data warehouses to the cloud faster, at a lower cost, and with fewer errors, even running in parallel with full data validation for months.

Datametica's specialized team of Data Scientists has implemented award-winning analytical models for use cases involving both unstructured and structured data.

Datametica has earned the highest level of partnership with Google, AWS, and Microsoft, which enables Datametica to deliver successful projects for clients across industry verticals at a global level, with teams deployed in the USA, EU, and APAC.

 

Recognition:

We are gratified to be recognized as a Top 10 Global Big Data Company by CIO Story.

 

If it excites you, please apply.

Pune, Hyderabad, Gandhinagar
6 - 12 yrs
₹5L - ₹21L / yr
Amazon Web Services (AWS)
Docker
Kubernetes
DevOps
Windows Azure
+9 more

Key Skills Required:

 

·         You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues in different systems, and will also be involved in building new features for the next generation of cloud recovery services and managed services. 

·         You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers. 

·         You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.

·         You will be responsible for reviewing the infrastructure and configuration of microservices, and the packaging and deployment of applications.

 

To be the right fit, you'll need:

 

·         Expertise in cloud services like AWS.

·         Experience in Terraform scripting.

·         Experience in container technology like Docker and orchestration like Kubernetes.

·         Good knowledge of CI/CD frameworks such as Jenkins, Bamboo, etc.

·         Experience with version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible)

MNC Company - Product Based
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
+2 more

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments
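As a sketch of the "data mappings & transformations" responsibility above, here is a minimal Python example of a configurable field mapping with type casting and validation, of the kind an ETL pipeline might apply before loading a warehouse (the field names and casting rules are invented for illustration):

```python
# Source field -> (warehouse column, type cast). Hypothetical mapping.
MAPPING = {
    "ad_id": ("advert_id", int),
    "spend": ("cost_usd", float),
    "campaign": ("campaign_name", str),
}


def transform(row: dict) -> dict:
    """Rename and cast one source row; fail loudly on missing fields."""
    out = {}
    for src, (dst, cast) in MAPPING.items():
        if src not in row:
            raise KeyError(f"missing required field: {src}")
        out[dst] = cast(row[src])
    return out


print(transform({"ad_id": "42", "spend": "3.50", "campaign": "summer"}))
# -> {'advert_id': 42, 'cost_usd': 3.5, 'campaign_name': 'summer'}
```

Keeping the mapping as data rather than code makes it easy to review with R&D and to extend as new source fields appear.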

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject; a Master's degree in any data subject is a strong advantage.
  • 4 - 6 years of experience in data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and data processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
  • Good working experience in at least one ETL tool (Airflow would be preferable).
  • Strong analytical and problem-solving skills.
  • Good-to-have skills: Apache Spark (PySpark), CircleCI, Terraform
  • Motivated, self-directed, able to work with ambiguity, and interested in emerging technologies and agile, collaborative processes.
  • Understanding and experience of agile/scrum delivery methodology

 

HitWicket

at HitWicket

1 recruiter
Vidhushi Gogu
Posted by Vidhushi Gogu
Hyderabad
4 - 10 yrs
₹6L - ₹15L / yr
Game development
Game Developer
MongoDB
Unity 3D
Google Cloud Platform (GCP)
+5 more
Role & Responsibilities: 

A strong technologist at Hitwicket cares about code modularity, scalability and reusability and thrives in a complex and ambiguous environment.

1. Design and develop game features from prototype to full implementation
2. Have strong Unity 3D programming skills in C# and JavaScript to craft cutting-edge, immersive mobile gaming experiences
3. Understand cross-platform development and client-server applications to give players a seamless and fun mobile gaming experience
4. Implement game functionality by translating design ideas, concepts, and requirements into functional and engaging features in the game
5. Get involved in all areas of game development including graphics, game logic, and user interface
6. Take ownership of the game and work closely with the Product and Design teams
7. Help maintain code quality by writing robust code to be used by millions of users
8. Develop crazy new experiences for our players and think outside the box to deliver something new
9. Be proactive, and support and contribute new ideas to game design.
10. Work experience of 4 - 10 years is preferable
11. Experience in team management is a plus
12. A broad skill set is a strength! We are a small team, so cross-discipline skills are highly valued.

What we offer you?

- Casual dress every single day
- 5 days a week
- Well stocked pantry
- Work with cool people and delight millions of gamers across the globe
- ESOPs and other incentives

About Us:

What is Hitwicket?

Hitwicket is a strategy-based cricket game played by a diverse group of over 3 million players across the world! Our mission is to build the world's best cricket game & be the first mobile Esports IP from India!

We're a Series A funded technology startup based in Hyderabad and co-founded by VIT alumni. We are backed by Prime Venture Partners, one of India's oldest and most successful venture funds - https://primevp.in/


Hitwicket Superstars won first prize in the Prime Minister's AatmaNirbhar Bharat App Innovation Challenge, a nationwide contest to identify the top homegrown startups building for the global market; Made in India, for India & the World!

What is next?

With the phenomenal success of our Cricket Game, we are now entering into the world of Football, NFTs & Blockchain gaming! We are assembling a team to join us on our mission to make something as massive as PUBG or Clash of Clans from India. - 

How are we unique?

  • 1st Place (Gaming) - Prime Minister’s Atma Nirbhar App Challenge
  • Selected among the 'Top Gaming Startups in Asia' for the inaugural batch of Google Games Accelerator program 2018 in Singapore
  • Selected among the 'Global Top 10 Startups in Sports Innovation' by the HYPE, UK, competition was held at the University Olympic Games in Taiwan
  • "Best App" Award presented by the IT Minister at HYSEA Annual Product Sumit

Our work culture is driven by speed, innovation and passion, not by hierarchy. Our work philosophy is centered around building a company like a 'Sports Team' where each member has an important role to play in the success of the company as a whole.

It doesn't matter if you are not a cricket fan, or a gamer, what matters to us are your Problem-solving skills and your Creativity.

Join us on this EPIC journey and make memories for a lifetime!

How to know more about us?

Watch us:

Hitwicket Superstars Launch - IGDC 2022: https://youtu.be/Q3arlzDf2KY | YourStory feature: https://yourstory.com/2022/10/keerti-singh-kashyap-reddy--india-cricket-web3-hitwicket | Hitwicket Story on Google Play: https://play.google.com/console/about/weareplay/ | Ms Keerti's interview on NDTV: https://youtu.be/K16vvQt6sAM

How to get connected with us?

Hitwicket Discord Global Community: https://discord.gg/aTnZswv9 | YouTube: https://youtube.com/channel/UCQ_0a_kBzap0nJySx668Ekg | LinkedIn: https://www.linkedin.com/company/metasports-media/ | Twitter: https://twitter.com/HitwicketGame | Instagram: https://instagram.com/hitwicketsuperstars | Facebook: https://m.facebook.com/HitwicketSuperstarsCricketGame | Reddit: https://www.reddit.com/r/hitwicketsuperstars | Medium: https://superstars-35181.medium.com/

 

Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
+2 more

Position Summary

DevOps is a Department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is a Cloud Engineering role for someone who also has some experience with infrastructure migrations. This is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; you are also expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.

Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-premises or on IaaS, PaaS, and containers.

So most of our DevOps work currently involves planning, architecting, and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of technical, commercial, and service elements related to cloud migration and infrastructure deployments.
  • The person selected for this position will ensure high customer satisfaction while delivering infra and migration projects.
  • The candidate should expect to work in parallel across multiple projects, and must have a fully flexible approach to working hours.
  • The candidate should keep up to date with the rapid technological advancements and developments taking place in the industry.
  • The candidate should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.

Requirements:

  • Bachelor's degree in computer science or equivalent qualification.
  • Total work experience of 6 to 8 years.
  • Total migration experience of 4 to 6 years.
  • Multiple-cloud background (Azure/AWS/GCP)
  • Implementation knowledge of VMs, VNets, etc.
  • Know-how of cloud readiness and assessment
  • Good understanding of the 6 R's of migration.
  • Detailed understanding of the cloud offerings
  • Ability to assess and perform discovery independently for any cloud migration.
  • Working experience with containers and Kubernetes.
  • Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure
  • Understanding of vSphere and Hyper-V virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS CloudFormation/Terraform templates.
  • Working experience with VPN/ExpressRoute/peering/Network Security Groups/route tables/NAT Gateway, etc.
  • Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, GitHub Actions.
  • High-availability and disaster-recovery implementations, taking RTO and RPO aspects into consideration.
  • Candidates with AWS/Azure/GCP certifications will be preferred.
EngageBay

at EngageBay

1 recruiter
Sreedhar Ambati
Posted by Sreedhar Ambati
Hyderabad
3 - 10 yrs
₹12L - ₹35L / yr
Java
HTML/CSS
Javascript
Fullstack Developer
Google Cloud Platform (GCP)
+2 more

Job Responsibilities:


  • Be part of a team developing a rapidly growing product used by thousands of small businesses.
  • Be responsible for building large-scale applications that are high-performance, scalable, and resilient in a microservices environment
  • Evaluate new technologies and tools such as new frameworks, methodologies, and best practices that will improve overall efficiency and product quality

Skill Set:


  • 3+ years of working experience in developing API-centric core Java/J2EE applications using Spring Boot, JPA, REST API, XML and JSON
  • Experience in a frontend framework - any one of Backbone, Angular, Vue or React
  • Working experience in any one of the cloud platforms - Google Cloud (preferable) or AWS
  • Experience with large-scale NoSQL databases such as MongoDB.
  • Hands-on experience in Eclipse-based development and using Git, SVN, JUnit
  • Past experience in startups or product development is a big plus.
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Apurva kalsotra
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
+14 more
Day-to-day Activities

  • Develop complex queries, pipelines and software programs to solve analytics and data mining problems
  • Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements, to deliver predictive and smart data solutions
  • Prototype new applications or data systems
  • Lead data investigations to troubleshoot data issues that arise along the data pipelines
  • Collaborate with different product owners to incorporate data science solutions
  • Maintain and improve the data science platform

Must Have

  • BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
  • Strong fundamentals: data structures, algorithms, databases
  • 5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehousing
  • Fluency with Python
  • Experience developing web services using REST approaches
  • Proficiency with SQL/Unix/Shell
  • Experience in DevOps (CI/CD, Docker, Kubernetes)
  • Self-driven, challenge-loving, detail-oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations

Preferred

  • Industry experience with big data processing technologies such as Spark and Kafka
  • Experience with machine learning algorithms and/or R a plus
  • Experience in Java/Scala a plus
  • Experience with any MPP analytics engines like Vertica
  • Experience with data integration tools like Pentaho/SAP Analytics Cloud
Aureus Tech Systems

at Aureus Tech Systems

3 recruiters
Krishna Kanth
Posted by Krishna Kanth
Hyderabad, Bengaluru (Bangalore), Chennai, Visakhapatnam, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 14 yrs
₹18L - ₹25L / yr
.NET
C#
ASP.NET
Web API
LINQ
+3 more

Title : .Net Developer with Cloud 

Locations: Hyderabad, Chennai, Bangalore, Pune and New Delhi (Remote).

Job Type: Full Time


.Net Job Description:

Required experience in the below skills:

  • Azure experience (Mandatory)
  • .Net programming (Mandatory)
  • DevSecOps capabilities (Desired)
  • Scripting skills (Desired)
  • Docker (Desired)
  • Data lake management (Desired)
  • Minimum of 5+ years of application development experience
  • Experience with MS Azure: App Service, Functions, Cosmos DB and Active Directory
  • Deep understanding of C#, .NET Core, ASP.NET Web API 2, MVC
  • Experience with MS SQL Server
  • Strong understanding of object-oriented programming
  • Experience working in an Agile environment
  • Strong understanding of code versioning tools such as Git or Subversion
  • Usage of automated build and/or unit testing and continuous integration systems
  • Excellent communication, presentation, influencing, and reasoning skills
  • Capable of building relationships with colleagues and key individuals
  • Must have the capability to learn new technologies

DataMetica

at DataMetica

1 video
7 recruiters
Nikita Aher
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java
+1 more

Summary
Our Kafka developer has a combination of technical skills, communication skills and business knowledge. The developer should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and in an Enterprise Data Warehouse, preferably GCP BigQuery or an equivalent cloud EDW, and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement and maintenance of data applications, both as an individual contributor and as a lead.
  • Lead the identification, isolation, resolution and communication of problems within the production environment.
  • Act as a lead developer applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and a cloud Enterprise Data Warehouse - Google BigQuery (preferred), AWS Redshift or Snowflake (optional)
  • Design and recommend the best approach for data movement from different sources to the Cloud EDW using Apache/Confluent Kafka
  • Perform independent functional and technical analysis for major projects supporting several corporate initiatives.
  • Communicate and work with IT partners and the user community at various levels, from senior management to individual developers to business SMEs, for project definition.
  • Work on multiple platforms and multiple projects concurrently.
  • Perform code and unit testing for complex-scope modules and projects
  • Provide expertise and hands-on experience working on Kafka Connect using the schema registry in a very high-volume environment (~900 million messages)
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams and Kafka Control Center.
  • Provide expertise and hands-on experience working on AvroConverters, JsonConverters, and StringConverters.
  • Provide expertise and hands-on experience working on Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors and JMS source connectors, as well as Tasks, Workers, converters and Transforms.
  • Provide expertise and hands-on experience with custom connectors using the Kafka core concepts and API.
  • Working knowledge of the Kafka REST proxy.
  • Ensure optimum performance, high availability and stability of solutions.
  • Create topics, set up redundancy clusters, deploy monitoring tools and alerts, and apply good knowledge of best practices.
  • Create stubs for producers, consumers and consumer groups to help onboard applications from different languages/platforms.
  • Leverage Hadoop ecosystem knowledge to design and develop capabilities to deliver our solutions using Spark, Scala, Python, Hive, Kafka and other tools in the Hadoop ecosystem.
  • Use automation tools like Jenkins, UDeploy or relevant technologies for provisioning
  • Ability to perform data-related benchmarking, performance analysis and tuning.
  • Strong skills in in-memory applications, database design and data integration.
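As an illustration of the JDBC-connector work listed above, here is a sketch of a Kafka Connect JDBC source connector configuration, written as a Python dict ready to be serialised and POSTed as JSON to the Connect REST API. The connection URL, table and topic names are placeholders; the property keys follow the Confluent JDBC source connector, but treat the exact set as an assumption to verify against the connector's documentation:

```python
import json

# Hypothetical connector: stream new rows of an 'orders' table into Kafka,
# using an auto-incrementing primary key to track progress.
jdbc_source = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "1",
        "connection.url": "jdbc:postgresql://db-host:5432/shop",
        "table.whitelist": "orders",
        "mode": "incrementing",
        "incrementing.column.name": "order_id",
        "topic.prefix": "pg-",  # rows land on topic 'pg-orders'
    },
}

# The JSON body as it would be sent to POST /connectors on the Connect worker.
print(json.dumps(jdbc_source, indent=2))
```

In practice this payload would be submitted with an HTTP client, and the connector's status then monitored via the same REST API.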
MTX

at MTX

2 recruiters
Sinchita S
Posted by Sinchita S
Hyderabad
7 - 10 yrs
₹38L - ₹56L / yr
DevOps
CI/CD
Google Cloud Platform (GCP)
PostgreSQL
Jenkins
+7 more

MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization and mobile technology. MTX's very own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.


Responsibilities:

  • Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
  • Troubleshoot technical or functional issues in a complex environment to provide timely resolution, with various applications and platforms that are global.
  • Bring experience on Google Cloud Platform.
  • Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
  • Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
  • Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
  • Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
  • Work with users to understand and gather their needs for our catalogue, then participate in the required development
  • Manage several streams of work concurrently
  • Understand how various systems work
  • Understand how IT operations are managed
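A tiny sketch of the scripting and monitoring work described above: scan a service's log lines and flag an error spike. The log format, level names and alert threshold are invented for illustration; a real script would read from a file or log aggregator:

```python
from collections import Counter


def summarise_log(lines, alert_threshold=3):
    """Count log levels (assumed to be the second space-separated token)
    and raise an alert flag when ERROR lines reach the threshold."""
    levels = Counter(line.split(" ", 2)[1] for line in lines if line.strip())
    alert = levels.get("ERROR", 0) >= alert_threshold
    return {"levels": dict(levels), "alert": alert}


# Hypothetical log excerpt from a failing service.
sample = [
    "2024-01-01T00:00:01 INFO service started",
    "2024-01-01T00:00:05 ERROR db connection refused",
    "2024-01-01T00:00:06 ERROR db connection refused",
    "2024-01-01T00:00:07 ERROR db connection refused",
]
print(summarise_log(sample))
```

The same shape of script, wired into cron or a CI job, is a common first step before adopting dedicated monitoring tooling.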


What you will bring:

  • 5 years of work experience as a DevOps Engineer.
  • Must possess ample knowledge and experience in system automation, deployment, and implementation.
  • Must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
  • Experience in the software development process and in tools and languages like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git. 
  • Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.

What we offer:


  • Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
    • Sum Insured: INR 5,00,000/- 
    • Maternity cover up to two children
    • Inclusive of COVID-19 Coverage
    • Cashless & Reimbursement facility
    • Access to free online doctor consultation

  • Personal Accident Policy (Disability Insurance)
  • Sum Insured: INR. 25,00,000/- Per Employee
  • Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
  • Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
  • Temporary Total Disability is covered

  • An option of a Paytm Food Wallet (up to Rs. 2,500) as a tax-saver benefit
  • Monthly internet reimbursement of up to Rs. 1,000 
  • Opportunity to pursue Executive Programs/ courses at top universities globally
  • Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others

                                                       *******************

Hyderabad, Bengaluru (Bangalore), Pune, Chennai
8 - 12 yrs
₹7L - ₹30L / yr
DevOps
Terraform
Docker
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Job Description:

 

○ Develop best practices for the team, and be responsible for architecture solutions and documentation operations, in order to meet the engineering department's quality standards

○ Participate in production outages, handle complex issues and work towards resolution

○ Develop custom tools and integrations with existing tools to increase engineering productivity

 

 

Required Experience and Expertise

 

○ Good knowledge of Terraform; someone who has worked on large TF codebases.

○ Deep understanding of Terraform, with best practices and experience writing TF modules.

○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3 and IAM. A cost-aware mindset towards cloud services.

○ Deep understanding of kernel, networking and OS fundamentals
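For the Terraform module work mentioned above, here is a minimal sketch of how a reusable module might be invoked from a root configuration. The module path, variable names and values are placeholders, assuming a local `./modules/vpc` module that declares these input variables:

```hcl
# Root configuration calling a hypothetical in-repo VPC module.
module "app_vpc" {
  source     = "./modules/vpc"
  name       = "app"
  cidr_block = "10.0.0.0/16"

  tags = {
    env  = "prod"
    team = "platform"
  }
}
```

Factoring infrastructure into modules like this is what makes large TF codebases reviewable: each module owns one concern, and roots only pass inputs.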

NOTICE PERIOD - Max - 30 days

SpringML

at SpringML

1 video
4 recruiters
Astha Deep
Posted by Astha Deep
Remote, Hyderabad
6 - 12 yrs
₹6L - ₹26L / yr
Google Cloud Platform (GCP)
Python

Hi All,

We are hiring!!

Company: SpringML India Pvt Ltd.

Role: Lead Data Engineer

Location: Hyderabad  

Website: https://springml.com/

 

 

About Company:

At SpringML, we are all about empowering the 'doers' in companies to make smarter decisions with their data. Our predictive analytics products and solutions apply machine learning to today's most pressing business problems so customers get insights they can trust to drive business growth.

We are a tight-knit, friendly team of passionate and driven people who are dedicated to learning, get excited to solve tough problems and like seeing results, fast. Our core values include placing our customers first, empathy and transparency, and innovation. We are a team with a focus on individual responsibility, rapid personal growth, and execution. If you share similar traits, we want you on our team.

What's the opportunity?

SpringML is looking to hire a top-notch Lead Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets.

As a Lead Data Engineer, your primary role will be to design and build data pipelines. You will be focused on helping client projects on data integration, data prep and implementing machine learning on datasets. 

In this role, you will work on some of the latest technologies, collaborate with partners on early win, consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.


Responsibilities:

  •  Ability to work as a member of a team assigned to design and implement data integration solutions.
  •  Build Data pipelines using standard frameworks in Hadoop, Apache Beam and other open-source solutions.
  •  Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
  •  Propose design solutions and recommend best practices for large scale data analysis

 

Skills:

  •  B.Tech degree in computer science, mathematics or other relevant fields.
  •  6+ years of experience in ETL, data warehousing, visualization and building data pipelines.
  •  Strong programming skills - experience and expertise in one of the following: Java, Python, Scala, C.
  •  Proficiency in big data/distributed computing frameworks such as Apache Spark and Kafka.
  •  Experience with Agile implementation methodology 
Searce Inc

at Searce Inc

64 recruiters
Ishita Awasthi
Posted by Ishita Awasthi
Hyderabad
4 - 6 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Microsoft Windows Azure
DevOps
Docker

 

Who we are?

Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What we believe?

  • Best practices are overrated
      • Implementing best practices can only make one 'average'.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How we work?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.

  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

Are you the one? Quick self-discovery test:

  • Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
  • Passion for sales: When was the last time you went at a remote gas station while on vacation, and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
  • Compassion for customers: You listen more than you speak.  When you do speak, people feel the need to listen.
  • Humor for life: When was the last time you told a concerned CEO, 'If Elon Musk can attempt to take humanity to Mars, why can't we take your business to run on cloud?'

Your bucket of undertakings:

  • This position will be responsible to consult with clients and propose architectural solutions to help move & improve infra from on-premise to cloud or help optimize cloud spend from one public cloud to the other
  • Be the first one to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
  • Continually augment skills and learn new tech as the technology and client needs evolve
  • Demonstrate knowledge of cloud architecture and implementation features (OS, multi-tenancy, virtualization, orchestration, elastic scalability)
  • Use your experience in Google cloud platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
  • Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud based technology and methods.
  • Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
  • Define optimal design patterns and solutions for high availability and disaster recovery for applications
  • Participate in technical reviews of requirements, designs, code and other artifacts Identify and keep abreast of new technical concepts in AWS
  • Security, Risk and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance and related areas
  • Develop solutions architecture and evaluate architectural alternatives for private, public and hybrid cloud models, including IaaS, PaaS, and other cloud services
  • Demonstrate leadership ability to back decisions with research and the "why," and articulate several options, the pros and cons for each, and a recommendation
  • Maintain overall industry knowledge on latest trends, technology, etc.
  • Contribute to DevOps development activities and complex development tasks
  • Act as a Subject Matter Expert on cloud end-to-end architecture, including AWS and future providers, networking, provisioning, and management

Accomplishment Set 

  • Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
  • Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
  • Strong service attitude and a commitment to quality. Highly organised and efficient. Confident working with others to inspire a high-quality standard.

Education, Experience, etc.

  • To reiterate: Passion to tech-awesome, insatiable desire to learn the latest of the new-age cloud tech, highly analytical aptitude and a strong ‘desire to deliver’ outlives those fancy degrees!
  • 6 - 10 years of experience, with at least 5 - 6 years of hands-on experience in Cloud Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment.
  • Good analytical, communication, problem solving, and learning skills.
  • Knowledge of programming against cloud platforms such as AWS and lean development methodologies.
Read more
Searce Inc

at Searce Inc

64 recruiters
Ishita Awasthi
Posted by Ishita Awasthi
Hyderabad
9 - 14 yrs
₹8L - ₹35L / yr
skill iconAmazon Web Services (AWS)
Migration
DevOps
VMWare
Data center migration
+6 more

Are you the one? Quick self-discovery test:

  1. Love for the cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
  2. Passion: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
  3. Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
  4. Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’

So what are we looking for?

  • Experience in on-premises to AWS cloud migration.
  • Knowledge of Linux and Windows servers.
  • Application-level knowledge of Java, .NET, Python, Ruby.
  • On-premises-to-cloud migration assessment experience is a must.
  • Able to provide a detailed migration analysis report and present it to the customer.
  • Creative problem-solving skills and superb communication skills.
  • Respond to technical queries/requests from team members and customers.
  • Ambitious individuals who can work under their own direction towards agreed targets/goals.
  • Ability to handle change and be open to it, along with good time management and the ability to work under stress.
  • Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
  • Maintain technical knowledge by attending educational workshops and reviewing publications.

Job Responsibilities:

  1. Manage initiatives for migration and modernization in the AWS cloud environment.
  2. Lead and build modernization architecture solutions from on-prem or VMWare into a modern platform (AWS cloud) through modular design, by understanding application components.
  3. Act as lead and SME in modernization methodology; able to lead Design Thinking workshops and tailor the method to the client's environment and industry.
  4. Familiarity with the 6 most common application migration strategies is required:
    1. Re-host (referred to as “lift and shift”)
    2. Re-platform (referred to as “lift, tinker, and shift”)
    3. Re-factor / Re-architect
    4. Re-purchase
    5. Retire
    6. Retain (referred to as “re-visit”)
  5. Application migration analysis experience, such as assessing application compatibility, networking, and security support on the cloud.
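The six strategies above (often called the “6 Rs”) amount to a decision procedure over application traits. As a rough illustration only, here is a hypothetical Python sketch of how a migration assessment might map traits to a strategy; the trait names and ordering are assumptions for demonstration, not part of any AWS tooling:

```python
from enum import Enum


class MigrationStrategy(Enum):
    REHOST = "Re-host (lift and shift)"
    REPLATFORM = "Re-platform (lift, tinker, and shift)"
    REFACTOR = "Re-factor / Re-architect"
    REPURCHASE = "Re-purchase"
    RETIRE = "Retire"
    RETAIN = "Retain (re-visit)"


def recommend(app: dict) -> MigrationStrategy:
    """Pick a 6R strategy from (hypothetical) application traits.

    Checks run from cheapest outcome to most invasive: decommission,
    defer, replace with SaaS, rebuild, tweak, or move as-is.
    """
    if not app.get("still_used", True):
        return MigrationStrategy.RETIRE            # decommission unused apps
    if app.get("compliance_blocked", False):
        return MigrationStrategy.RETAIN            # re-visit later
    if app.get("saas_alternative", False):
        return MigrationStrategy.REPURCHASE        # move to a SaaS product
    if app.get("needs_cloud_native_redesign", False):
        return MigrationStrategy.REFACTOR          # re-architect for cloud
    if app.get("minor_optimizations", False):
        return MigrationStrategy.REPLATFORM        # e.g. self-managed DB -> RDS
    return MigrationStrategy.REHOST                # default: lift and shift


print(recommend({"still_used": False}).name)  # -> RETIRE
print(recommend({}).name)                     # -> REHOST
```

In a real assessment the inputs would come from discovery tooling and interviews rather than a flat dict, but the shape of the decision is the same.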

Qualifications:

  1. Is Education overrated? Yes. We believe so. But there is no way to locate you otherwise. So we might look for at least a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since 12.
    1. And the latter is better. We will find you faster if you specify the latter in some manner. Not just a degree, but we are not too thrilled by tech certifications either :)
  2. Architects with 10+ total and 6+ years of experience on Modernization applications and led Architecture initiatives on AWS Modernization.
  3. Managed and implemented at least 5 engagements modernizing client applications to AWS Cloud, on WebSphere and Java/J2EE or .NET.
  4. Experience on using DevOps tools during Modernization.
  5. Complete in-depth experience and knowledge of AWS as a product and its components.
  6. AWS certification would be preferred.
  7. Experience in Agile fundamentals and methodology.
Read more
Searce Inc

at Searce Inc

64 recruiters
Ishita Awasthi
Posted by Ishita Awasthi
Hyderabad
4 - 8 yrs
₹11L - ₹25L / yr
DevOps
skill iconJenkins
skill iconDocker
skill iconKubernetes
Ansible
+5 more

Are you the one? Quick self-discovery test:

  1. Love for the cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
  2. Passion: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
  3. Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
  4. Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’

Your bucket of undertakings:

This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infra from on-premises to cloud, or to help optimize cloud spend when moving from one public cloud to another.

  1. Be the first one to experiment on new-age cloud offerings, help define the best practice as a thought leader for cloud, automation & Dev-Ops, be a solution visionary and technology expert across multiple channels.
  2. Continually augment skills and learn new tech as the technology and client needs evolve
  3. Use your experience in the Google cloud platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
  4. Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud-based technology and methods.
  5. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
  6. Participate in technical reviews of requirements, designs, code, and other artifacts
  7. Identify and keep abreast of new technical concepts in Google Cloud Platform
  8. Security, Risk, and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance, and related areas.

Accomplishment Set 

  • Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility 
  • Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
  • Strong service attitude and a commitment to quality.
  • Highly organised and efficient.
  • Confident working with others to inspire a high-quality standard.

Experience :

  • 4-8 years of experience in Cloud Infrastructure and Operations domains
  • Experience with Linux systems and/or Windows servers
  • Specialization in one or two cloud deployment platforms: AWS, GCP
  • Hands-on experience with AWS/GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine, API Gateway, AppSync, and Service Mesh)
  • Experience in one or more scripting languages: Python, Bash
  • Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
  • Logging and monitoring tools (ELK, Stackdriver, CloudWatch)
  • DevOps technologies (AWS DevOps, Jenkins, Git, Maven)
  • Knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef, Packer
  • Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)

Education :

  1. Is Education overrated? Yes. We believe so. However, there is no way to locate you otherwise. So, unfortunately, we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since 12. And the latter is better. We will find you faster if you specify the latter in some manner. Not just a degree, but we are not too thrilled by tech certifications either :)
  2. To reiterate: Passion to tech-awesome, insatiable desire to learn the latest of the new-age cloud tech, highly analytical aptitude and a strong ‘desire to deliver’ outlives those fancy degrees!
  3. 3-8 years of experience with hands-on experience in Cloud Computing (AWS/GCP) and IT operational experience in a global enterprise environment.
  4. Good analytical, communication, problem solving, and learning skills.
  5. Knowledge of programming against cloud platforms such as Google Cloud Platform and lean development methodologies.
Read more
SpringML

at SpringML

1 video
4 recruiters
Sai Raj Sampath
Posted by Sai Raj Sampath
Remote, Hyderabad
4 - 9 yrs
₹12L - ₹20L / yr
Big Data
Data engineering
TensorFlow
Apache Spark
skill iconJava
+2 more
REQUIRED SKILLS:

• Total of 4+ years of experience in development, architecting/designing and implementing Software solutions for enterprises.

• Must have strong programming experience in either Python or Java/J2EE.

• Minimum of 4+ years' experience working with various Cloud platforms, preferably Google Cloud Platform.

• Experience in architecting and designing solutions leveraging Google Cloud products such as Cloud BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Bigtable, and TensorFlow will be highly preferred.

• Presentation skills with a high degree of comfort speaking with management and developers

• The ability to work in a fast-paced work environment

• Excellent communication, listening, and influencing skills

RESPONSIBILITIES:

• Lead teams to implement and deliver software solutions for Enterprises by understanding their requirements.

• Communicate efficiently and document the Architectural/Design decisions to customer stakeholders/subject matter experts.

• Learn new products quickly and rapidly comprehend new technical areas (technical and functional), applying detailed and critical thinking to customer solutions.

• Implementing and optimizing cloud solutions for customers.

• Migration of Workloads from on-prem/other public clouds to Google Cloud Platform.

• Provide solutions to team members for complex scenarios.

• Promote good design and programming practices with various teams and subject matter experts.

• Ability to work on any product on Google Cloud Platform.

• Must be hands-on and be able to write code as required.

• Ability to lead junior engineers and conduct code reviews



QUALIFICATION:

• Minimum B.Tech/B.E Engineering graduate
Read more
Dremio

at Dremio

4 recruiters
Maharaja Subramanian (CW)
Posted by Maharaja Subramanian (CW)
Remote, Bengaluru (Bangalore), Hyderabad
3 - 10 yrs
₹15L - ₹65L / yr
skill iconJava
skill iconC++
Microservices
Algorithms
Data Structures
+10 more

Be Part Of Building The Future

Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.

About the Role

The Dremio India team owns the DataLake Engine along with the Cloud Infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides various opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.

Responsibilities & ownership

  • Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
  • Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
  • Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
  • Lead the team to solve complex and unknown problems 
  • Solve technical problems and customer issues with technical expertise
  • Design and deliver architectures that run optimally on public clouds like  GCP, AWS, and Azure
  • Mentor other team members for high quality and design 
  • Collaborate with Product Management to deliver on customer requirements and innovation
  • Collaborate with Support and field teams to ensure that customers are successful with Dremio

Requirements

  • B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
  • Fluency in Java/C++ with 8+ years of experience developing production-level software
  • Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
  • 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
  • Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
  • Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
  • Passion for learning and delivering using latest technologies
  • Ability to solve ambiguous, unexplored, and cross-team problems effectively
  • Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
  • Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
  • Understanding of distributed file systems such as S3, ADLS, or HDFS
  • Excellent communication skills and affinity for collaboration and teamwork
  • Ability to work individually and collaboratively with other team members
  • Ability to scope and plan solutions for big problems and mentor others on the same
  • Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
Read more
Dremio

at Dremio

4 recruiters
Kiran B
Posted by Kiran B
Hyderabad
6 - 12 yrs
₹20L - ₹40L / yr
Reliability engineering
Site reliability
DevOps
skill iconPython
CI/CD
+5 more

About the Role

Dremio’s SREs ensure that our internal and externally visible services have reliability and uptime appropriate to users' needs and a fast rate of improvement. You will be joining a newly formed team that will spearhead our efforts to launch a cloud service. This is an opportunity to join a fast-growing startup and help build a cloud service from the ground up.

Responsibilities and Ownership

  • Ability to debug and optimize code and automate routine tasks.
  • Evangelize and advocate for reliability practices across our organization.
  • Collaborate with other Engineering teams to support services before they go live through activities such as system design consulting, developing software platforms and frameworks, monitoring/alerting, capacity planning and launch reviews.
  • Analyze and optimize our core product by developing and implementing reliability and performance practices.
  • Scale systems sustainably through automation and evolve systems by pushing for changes that improve reliability and velocity.
  • Be on-call for services that the SRE team owns.
  • Practice sustainable incident response and blameless postmortems.

Qualifications

  • 6+ years of relevant experience in the following areas: SRE, DevOps, Cloud Operations, Systems Engineering, or Software Engineering.
  • Excellent command of cloud services on AWS/GCP/Azure, Kubernetes and CI/CD pipelines.
  • Have moderate-to-advanced experience in Java, C, C++, Python, Go, or other object-oriented programming languages.
  • You are interested in designing, analyzing, and troubleshooting large-scale distributed systems.
  • You have a systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
  • You have a great ability to debug and optimize code and automate routine tasks.
  • You have a solid background in software development and architecting resilient and reliable applications.
Read more
Dremio

at Dremio

4 recruiters
Kiran B
Posted by Kiran B
Hyderabad, Bengaluru (Bangalore)
15 - 20 yrs
Best in industry
skill iconJava
Data Structures
Algorithms
Multithreading
Problem solving
+7 more

About the Role

The Dremio India team owns the DataLake Engine along with the Cloud Infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides various opportunities to learn, deliver, and grow in your career. We are looking for technical leaders with passion and experience in architecting and delivering high-quality distributed systems at massive scale.

Responsibilities & ownership

  • Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
  • Lead and mentor others about concurrency, parallelization to deliver scalability, performance and resource optimization in a multithreaded and distributed environment
  • Propose and promote strategic company-wide tech investments taking care of business goals, customer requirements, and industry standards
  • Lead the team to solve complex, unknown and ambiguous problems, and customer issues cutting across team and module boundaries with technical expertise, and influence others
  • Review and influence designs of other team members 
  • Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
  • Partner with other leaders to nurture innovation and engineering excellence in the team
  • Drive priorities with others to facilitate timely accomplishments of business objectives
  • Perform RCA of customer issues and drive investments to avoid similar issues in future
  • Collaborate with Product Management, Support, and field teams to ensure that customers are successful with Dremio
  • Proactively suggest learning opportunities about new technology and skills, and be a role model for constant learning and growth

Requirements

  • B.S./M.S/Equivalent in Computer Science or a related technical field or equivalent experience
  • Fluency in Java/C++ with 15+ years of experience developing production-level software
  • Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models and their use in developing distributed and scalable systems
  • 8+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
  • Subject Matter Expert in one or more of: query processing or optimization, distributed systems, concurrency, microservice-based architectures, data replication, networking, storage systems
  • Experience in taking company-wide initiatives, convincing stakeholders, and delivering them
  • Expert in solving complex, unknown and ambiguous problems spanning across teams and taking initiative in planning and delivering them with high quality
  • Ability to anticipate and propose plan/design changes based on changing requirements 
  • Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
  • Passion for learning and delivering using latest technologies
  • Hands-on experience working on projects on AWS, Azure, and GCP
  • Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and GCP)
  • Understanding of distributed file systems such as S3, ADLS, or HDFS
  • Excellent communication skills and affinity for collaboration and teamwork

 

Read more
statnetics

at statnetics

3 recruiters
CloudQA Dev
Posted by CloudQA Dev
Hyderabad
0 - 2 yrs
₹3L - ₹5L / yr
skill icon.NET
ASP.NET
skill iconJavascript
ASP.NET MVC
skill iconAmazon Web Services (AWS)
+8 more

CloudQA is a bootstrapped SaaS startup based out of Hyderabad. Our web application testing tools have helped companies ensure their customers get the best digital experiences.

You would be working on a world-class product in a technically strong team, competing with ex-Google teams and Silicon Valley startups. CloudQA is used by some of the leading enterprises, government institutions, and rising startups across the world. We seek someone with a strong technical background in both front-end and back-end development. You’ll be part of a cross-functional team that’s responsible for the full software development life cycle, from conception to deployment.

Developer Requirements and Qualifications:

  1. Good communication skills; should be able to demonstrate website features
  2. Must have good hands-on experience with HTML/CSS, JavaScript, C#, SQL, the Selenium framework, and Docker
  3. Good to have experience with Vue.js, AngularJS, and code repositories such as GitLab
  4. Good to have experience with AWS and GCP cloud service providers and Kubernetes
  5. Understanding of testing frameworks

Duties and Responsibilities:

  1. Collaborate with other developers and engineers to design, build, and maintain applications
  2. Develop and maintain highly reliable and scalable web services
  3. Understand product requirements; engage with team members and customers to define solutions and estimate the scope of work required
  4. Develop and deliver solutions that can keep up with a rapidly evolving product in a timely fashion
  5. Improve the product's features and performance in any way possible
  6. Write effective APIs
  7. Write JavaScript/jQuery to work with HTML components
  8. Work with Git repositories and CI/CD
Read more
statnetics

at statnetics

3 recruiters
CloudQA Dev
Posted by CloudQA Dev
Hyderabad
0 - 1 yrs
₹1L - ₹2L / yr
skill iconJavascript
ASP.NET
skill icon.NET
skill iconHTML/CSS
SQL server
+5 more
CloudQA is a bootstrapped SaaS startup based out of Hyderabad. Our web application testing tools have helped companies ensure their customers get the best digital experiences.

You would be working on a world-class product in a technically strong team, competing with ex-Google teams and Silicon Valley startups. CloudQA is used by some of the leading enterprises, government institutions, and rising startups across the world. We seek someone with a strong technical background in both front-end and back-end development. You’ll be part of a cross-functional team that’s responsible for the full software development life cycle, from conception to deployment.

Developer Requirements and Qualifications:
  1. Good communication skills; should be able to demonstrate website features
  2. Must have good hands-on experience with HTML/CSS, JavaScript, and C#
  3. Experience using SQL to update and retrieve data
  4. Good to have experience with Vue.js, AngularJS, Selenium frameworks, and code repositories such as GitLab
  5. Analytical skills
  6. Understanding of testing frameworks
Duties and Responsibilities:
  1. Collaborate with other developers and engineers to design, build, and maintain applications
  2. The candidate is responsible for the design and development of small modules in ASP.NET Core MVC, C#, SQL Server, and client-side technologies such as JavaScript/jQuery
  3. Write and debug code
  4. Troubleshoot application issues
  5. Ability to learn and pick up new frameworks
Read more
Aqua Security

at Aqua Security

4 recruiters
Rajeshwari
Posted by Rajeshwari
Remote, Hyderabad
7 - 22 yrs
Best in industry
DevOps
CI/CD
devsecops
skill iconAmazon Web Services (AWS)
Partnership and Alliances
+5 more
Description:

As a Partner Development Solution Architect focused on GSI partners within Aqua Security, you will have the opportunity to deliver on a strategy to build mind share and broad use of the Aqua Platform across the partner community. Your broad responsibilities will include owning the technical engagement with strategic partners, positioning Aqua to be part of partner offerings, and assisting with the creation of new technical strategies to help partners build and grow their application security practice business. You will be responsible for providing subject-matter expertise on the security of running cloud-native workloads, which are rapidly being adopted in enterprise deployments. You will also drive technical relationships with all stakeholders and support sales opportunities, working closely with the internal sales and partner sales teams throughout the sales process to ensure all of the partners' technical needs are understood and met with the best possible solution.


Responsibilities:

The ideal person will have excellent communication skills and be able to translate technical requirements for a non-technical audience. This person can multi-task and is self-motivated, while still interacting well with a team; they are highly organized, with a high energy level and a can-do attitude. Required skills include:

  • Experience as a sales engineer or solution architect, working with enterprise software products or services.
  • Ability to assess partner and customer requirements, identify business problems, and demonstrate proposed solutions.
  • Ability to present at technical meetups.
  • Ability to work with partners and conduct technical workshops
  • Recent familiarity or hands-on experience with:

- Linux distributions, Windows Server

- Networking configurations, routing, firewalling

- DevOps eco-system: CI/CD tools, datacenter automation, open source tools like Jenkins

- Cloud computing environments (AWS, Azure, and Google Compute)

- Container technologies like Docker, Kubernetes, OpenShift and Mesos

- Knowledge of general security practices & DevSecOps

  • Up to 25% travel is expected. The ideal candidate will be located in Hyderabad, India


Requirements:

  • 7+ years of hands on implementation or consulting experience
  • 3+ years in a customer and or partner facing roles
  • Experience working with end users or developer communities
  • Experience working effectively across internal and external organizations
  • Knowledge of the software development lifecycle
  • Strong verbal and written communications
  • BS degree or equivalent experience required
Read more