Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru)


Apply to 50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

Publicis Sapient

at Publicis Sapient

10 recruiters
Mohit Singh
Posted by Mohit Singh
Bengaluru (Bangalore), Pune, Hyderabad, Gurugram, Noida
5 - 11 yrs
₹20L - ₹36L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+7 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.

Job Summary:

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.


Role & Responsibilities:

Your role is focused on Design, Development and delivery of solutions involving:

• Data Integration, Processing & Governance

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Implement scalable architectural models for data processing and storage

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode

• Build functionality for data analytics, search and aggregation
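As a rough illustration of the ingestion-and-aggregation responsibilities above, here is a toy sketch in plain Python rather than the Spark/Hadoop stack the role actually uses; all source data and field names are invented:

```python
import csv
import io
import json
from collections import defaultdict

# Two toy "heterogeneous sources": a CSV extract and a JSON event dump.
CSV_SOURCE = "order_id,region,amount\n1,south,120.5\n2,north,80.0\n"
JSON_SOURCE = '[{"order_id": 3, "region": "south", "amount": 40.0}]'

def ingest_csv(text):
    """Normalize CSV rows into plain dicts with typed fields."""
    return [
        {"order_id": int(r["order_id"]), "region": r["region"], "amount": float(r["amount"])}
        for r in csv.DictReader(io.StringIO(text))
    ]

def ingest_json(text):
    """JSON records are already dict-shaped; coerce the fields we need."""
    return [
        {"order_id": int(r["order_id"]), "region": r["region"], "amount": float(r["amount"])}
        for r in json.loads(text)
    ]

def aggregate_by_region(records):
    """Group-and-sum step, the same shape as a Spark groupBy().sum()."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["region"]] += rec["amount"]
    return dict(totals)

merged = ingest_csv(CSV_SOURCE) + ingest_json(JSON_SOURCE)
TOTALS = aggregate_by_region(merged)
```

In a real pipeline the same normalize-then-aggregate shape would be expressed as distributed transformations over HDFS or cloud storage.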

Experience Guidelines:

Mandatory Experience and Competencies:


1. Overall 5+ years of IT experience, with 3+ years in data-related technologies

2. Minimum 2.5 years of experience in Big Data technologies and working exposure to related data services on at least one cloud platform (AWS / Azure / GCP)

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

6. Well-versed working knowledge of data platform related services on at least one cloud platform, IAM, and data security


Preferred Experience and Knowledge (Good to Have):


1. Good knowledge of and hands-on experience with traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality

6. Cloud data specialty and other related Big Data technology certifications


Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes


Publicis Sapient

at Publicis Sapient

10 recruiters
Mohit Singh
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will contribute to technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on Design, Development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation


Experience Guidelines:

Mandatory Experience and Competencies:


1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies

2. Minimum 1.5 years of experience in Big Data technologies

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
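A minimal sketch of the windowing logic that frameworks like Spark Streaming or Flink provide out of the box, in plain Python and with invented event data, purely to illustrate the tumbling-window concept behind real-time pipelines:

```python
from collections import Counter, defaultdict

def tumbling_window_counts(events, window_seconds):
    """Assign each (timestamp, key) event to a fixed-size, non-overlapping
    window and count occurrences of each key per window."""
    windows = defaultdict(Counter)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(c) for w, c in sorted(windows.items())}

# Invented events: (timestamp in seconds, event type).
EVENTS = [(0, "click"), (3, "view"), (7, "click"), (11, "click"), (14, "view")]
COUNTS = tumbling_window_counts(EVENTS, window_seconds=10)
# Window 0 covers timestamps 0-9; window 10 covers 10-19.
```

A streaming engine does the same bucketing incrementally and distributed, plus state management and fault tolerance.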


Preferred Experience and Knowledge (Good to Have):


1. Good knowledge of and hands-on experience with traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality

6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security

7. Cloud data specialty and other related Big Data technology certifications


Job Title: Senior Associate L1 – Data Engineering

Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
Data Streaming

Data Engineering: Senior Engineer / Manager


As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.


Must-Have Skills:


1. GCP


2. Spark Streaming: live data streaming experience is desired.


3. Any one coding language: Java / Python / Scala
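Live streaming experience of the kind asked for above usually involves reasoning about late-arriving events. A toy watermark check in plain Python (event data is invented; the actual Spark Structured Streaming watermark API works differently and handles this inside the engine):

```python
def split_on_watermark(events, max_lateness):
    """Process events in arrival order; the watermark trails the maximum
    event time seen so far by `max_lateness`. Events whose timestamps fall
    behind the watermark are treated as too late and dropped."""
    max_ts = float("-inf")
    accepted, dropped = [], []
    for ts, payload in events:
        max_ts = max(max_ts, ts)
        watermark = max_ts - max_lateness
        if ts >= watermark:
            accepted.append((ts, payload))
        else:
            dropped.append((ts, payload))
    return accepted, dropped

# Arrival order differs from event-time order: (4, "late") arrives after
# event time 12 has been seen, so it falls behind the watermark of 7.
EVENTS = [(10, "a"), (12, "b"), (4, "late"), (9, "ok")]
ACCEPTED, DROPPED = split_on_watermark(EVENTS, max_lateness=5)
```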



Skills & Experience:


- Minimum 5 years of overall experience, with at least 4 years of relevant experience in Big Data technologies


- Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage


- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred


- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


- Well-versed working knowledge of data platform related services on GCP


- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training and/or experience that demonstrates the ability to perform the duties of the position


Your Impact :


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation

Bengaluru (Bangalore)
5 - 11 yrs
Best in industry
JavaScript
Amazon Web Services (AWS)
Windows Azure
Java
React.js
+3 more

About DataGrokr:

DataGrokr (www.datagrokr.com) is a cloud-native technology consulting organization providing the next generation of data management, cloud and enterprise solutions. We solve complex technology problems for our global clients who rely on us for our deep technical knowledge and delivery excellence.

If you are unafraid of technology, believe in your learning ability and are looking to work amongst smart, driven colleagues whom you can look up to and learn from, you might want to check us out.


About the Role:

(Job location – Bangalore)

Job Overview:

We are seeking a highly skilled and experienced Lead Full Stack Developer to join our team. The ideal candidate will be responsible for building complex applications and leading a full-stack development team, working closely with our development team and stakeholders to develop innovative solutions and drive technical excellence in building high-quality, scalable, and responsive web applications.


Roles and Responsibilities:

• Lead and manage a team of full-stack developers.

• Design and implement complex applications and architecture using modern software development practices.

• Collaborate with product managers, designers, and other stakeholders to understand project requirements and develop technical solutions that meet business needs.

• Provide technical leadership and mentorship to the development team, and conduct code reviews to ensure code quality and maintainability.

• Ensure that the codebase is scalable, maintainable, and of high quality.

• Optimize application performance and user experience.

• Stay up to date with the latest trends and best practices in both Frontend and Backend development and recommend new tools and technologies to improve the development process.

• Define and enforce coding standards, development methodologies, and best practices.

Desired Candidate Profile:

• Bachelor's degree in Computer Science & Engineering or a related field.

• At least 7 years of experience in Full Stack development

• Minimum of 5 years in a Lead/Architect role.

• Experience in designing and implementing complex applications and architecture.

• Minimum 4 years of experience in at least one cloud technology such as AWS, GCP, or Azure.

• Experience in Web Application Frameworks like Angular or React for frontend and NodeJS, Flask or Django for Backend.

• Strong knowledge of JavaScript, HTML, CSS, and other web technologies.

• Strong knowledge of any one OO programming language such as Python, Java, or C#.

• Experience with state management libraries such as Redux, MobX or Zustand.

• Solid understanding of Security Best Practices.

• Good understanding of Testing – Unit, Integrated, Regression and E2E.

• Exposure to micro-services / serverless architecture is a plus

• Exposure to CICD tools like Azure DevOps, Jenkins, Gitlab CICD, etc.

• Write well-designed, testable, efficient code by using best software development practices.

• Active contribution to open-source communities and libraries.

Benefits:

• You will work in an open culture that promotes commitment over compliance, individual responsibility over rules and bringing out the best in everyone.

• You will be actively encouraged to attain certifications, lead technical workshops and conduct meetups to grow your own technology acumen and personal brand.

• You will be groomed and mentored by senior leaders to take on positions of increased responsibility.


If you are a passionate and skilled Full Stack Developer with leadership experience and want to be part of a young, innovative and competent team, we encourage you to apply.

Three Dots

at Three Dots

2 recruiters
Akul Aggarwal
Posted by Akul Aggarwal
Bengaluru (Bangalore)
3 - 8 yrs
₹12L - ₹15L / yr
React.js
JavaScript
AngularJS (1.x)
Angular (2+)
NodeJS (Node.js)
+4 more

Job Title: Senior Full Stack Engineer

Location: Bangalore

About threedots:

At threedots, we are committed to helping our customers navigate the complex world of secured credit financing. Our mission is to facilitate financial empowerment through innovative, secured credit solutions like loans against property, securities, FDs & more. Founded by early members of Groww, we are a well-funded startup with over $4M in funding from India's top investors.


Role Overview:

The Senior Full Stack Engineer will be responsible for developing and managing our web infrastructure and leading a team of talented engineers. With a solid background in both front and back-end technologies, and a proven track record of developing scalable web applications, the ideal candidate will have a hands-on approach and a leader's mindset.


Key Responsibilities:

  • Lead the design, development, and deployment of our Node and ReactJS-based applications.
  • Architect scalable and maintainable web applications that can handle the needs of a rapidly growing user base.
  • Ensure the technical feasibility and smooth integration of UI/UX designs.
  • Optimize applications for maximum speed and scalability.
  • Implement comprehensive security and data protection.
  • Manage and review code contributed by the team and maintain high standards of software quality.
  • Deploy applications on AWS/GCP and manage server infrastructure.
  • Work collaboratively with cross-functional teams to define, design, and ship new features.
  • Provide technical leadership and mentorship to other team members.
  • Keep abreast with the latest technological advancements to leverage new tech and tools.

Minimum Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Minimum 3 years of experience as a full-stack developer.
  • Proficient in Node.js and ReactJS.
  • Experience with cloud services (AWS/GCP).
  • Solid understanding of web technologies, including HTML5, CSS3, JavaScript, and responsive design.
  • Experience with databases, web servers, and UI/UX design.
  • Strong problem-solving skills and the ability to make sound architectural decisions.
  • Proven ability to lead and mentor a tech team.

Preferred Qualifications:

  • Experience in fintech
  • Strong knowledge of software development methodologies and best practices.
  • Experience with CI/CD pipelines and automated testing.
  • Familiarity with microservices architecture.
  • Excellent communication and leadership skills.

What We Offer:

  • The opportunity to be part of a founding team and shape the company's future.
  • Competitive salary with equity options.
  • A creative and collaborative work environment.
  • Professional growth opportunities as the company expands.
  • Additional Startup Perks


Intellikart Ventures LLP
ramandeep intellikart
Posted by ramandeep intellikart
Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹30L / yr
Python
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Databases
+1 more

How You'll Contribute:

● Redefine Fintech architecture standards by building easy-to-use, highly scalable, robust, and flexible APIs

● Perform in-depth analysis of systems and architectures, predict potential future breakdowns, and proactively propose solutions

● Partner with internal stakeholders to identify potential feature implementations that could cater to our growing business needs

● Drive the team towards writing high-quality code; address flaws in system design to attain high API performance, code reusability, and readability

● Think through the complex Fintech infrastructure and propose an easy-to-deploy modular infrastructure that could adapt and adjust to the specific requirements of the growing client base

● Design and create for scale, optimized memory usage, and high-throughput performance


Skills Required:

● 5+ years of experience in the development of complex distributed systems

● Prior experience in building sustainable, reliable and secure microservice-based scalable architecture using Python Programming Language

● In-depth understanding of Python associated libraries and frameworks

● Strong involvement in managing and maintaining production-level code with high-volume API hits and low-latency APIs

● Strong knowledge of data structures, algorithms, design patterns, multithreading concepts, etc.

● Ability to design and implement technical road maps for the system and components

● Bring in new software development practices, design/architecture innovations to make our Tech stack more robust

● Hands-on experience in cloud technologies like AWS/GCP/Azure as well as relational databases like MySQL/PostgreSQL or any NoSQL database like DynamoDB
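One concrete flavor of the low-latency, memory-conscious API work described above is an in-process TTL cache. A minimal sketch in plain Python with an injectable clock so the expiry behavior is deterministic; class and key names are illustrative:

```python
import time

class TTLCache:
    """Tiny in-process cache: serve repeated reads from memory and
    re-compute only after the entry's time-to-live has expired."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for deterministic tests
        self._store = {}            # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]           # still fresh: no recomputation
        value = compute()
        self._store[key] = (now + self.ttl, value)
        return value

# Demo with a fake clock: two reads inside the TTL hit the cache,
# a read after expiry recomputes.
_t = [0.0]
demo = TTLCache(ttl_seconds=30, clock=lambda: _t[0])
CALLS = []

def _compute():
    CALLS.append(1)
    return "fx-rates"

FIRST = demo.get_or_compute("fx", _compute)
SECOND = demo.get_or_compute("fx", _compute)
_t[0] = 31.0
THIRD = demo.get_or_compute("fx", _compute)
```

In a multi-instance deployment the same idea is usually externalized to something like Redis, but the hit/expire logic is the same.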

Healthtech Startup
Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹30L / yr
Google Cloud Platform (GCP)
bigquery

Description: 

As a Data Engineering Lead at Company, you will be at the forefront of shaping and managing our data infrastructure with a primary focus on Google Cloud Platform (GCP). You will lead a team of data engineers to design, develop, and maintain our data pipelines, ensuring data quality, scalability, and availability for critical business insights. 


Key Responsibilities: 

1. Team Leadership: 

a. Lead and mentor a team of data engineers, providing guidance, coaching, and performance management. 

b. Foster a culture of innovation, collaboration, and continuous learning within the team. 

2. Data Pipeline Development (Google Cloud Focus): 

a. Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, and Dataprep.

b. Implement best practices for data extraction, transformation, and loading (ETL) processes on GCP. 

3. Data Architecture and Optimization: 

a. Define and enforce data architecture standards, ensuring data is structured and organized efficiently. 

b. Optimize data storage, processing, and retrieval for maximum 

performance and cost-effectiveness on GCP. 

4. Data Governance and Quality: 

a. Establish data governance frameworks and policies to maintain data quality, consistency, and compliance with regulatory requirements.

b. Implement data monitoring and alerting systems to proactively address data quality issues.

5. Cross-functional Collaboration: 

a. Collaborate with data scientists, analysts, and other cross-functional teams to understand data requirements and deliver data solutions that drive business insights. 

b. Participate in discussions regarding data strategy and provide technical expertise. 

6. Documentation and Best Practices: 

a. Create and maintain documentation for data engineering processes, standards, and best practices. 

b. Stay up-to-date with industry trends and emerging technologies, making recommendations for improvements as needed. 
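As a hedged sketch of the GCP pipeline work described in the responsibilities above: building a parameterized BigQuery query with the google-cloud-bigquery client. Table, column, and parameter names are invented, and the client call needs GCP credentials, so it is kept in its own function and never run at import time:

```python
def build_daily_load_query(source_table, target_date):
    """Build a parameterized BigQuery SQL statement for one day's data.
    Table and column names here are illustrative, not from the posting."""
    assert target_date.count("-") == 2  # expect YYYY-MM-DD
    sql = (
        f"SELECT user_id, event_type, COUNT(*) AS events "
        f"FROM `{source_table}` "
        f"WHERE DATE(event_ts) = @target_date "
        f"GROUP BY user_id, event_type"
    )
    return sql, {"target_date": target_date}

def run_on_bigquery(sql, params):
    # Requires the google-cloud-bigquery package and GCP credentials;
    # isolated here so the builder above stays testable offline.
    from google.cloud import bigquery
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(name, "STRING", value)
            for name, value in params.items()
        ]
    )
    return list(client.query(sql, job_config=job_config))

SQL, PARAMS = build_daily_load_query("proj.dataset.events", "2024-01-01")
```

Query parameters (rather than string interpolation of values) are what keeps a scheduled daily load safe and cacheable.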


Qualifications 

● Bachelor's or Master's degree in Computer Science, Data Engineering, or related field. 

● 5+ years of experience in data engineering, with a strong emphasis on Google Cloud Platform. 

● Proficiency in Google Cloud services, including BigQuery, Dataflow, Dataprep, and Cloud Storage. 

● Experience with data modeling, ETL processes, and data integration.

● Strong programming skills in languages like Python or Java.

● Excellent problem-solving and communication skills. 

● Leadership experience and the ability to manage and mentor a team.


Thoughtworks

at Thoughtworks

1 video
27 recruiters
Sunidhi Thakur
Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
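The TDD pairing mentioned above might look like this for a small pipeline transform: the assertions are written first and drive the implementation. Record shape and function name are invented for illustration:

```python
# A transform written test-first: the assertions below existed before the body.
def dedupe_latest(records):
    """Keep only the newest record per id, returned in id order."""
    latest = {}
    for rec in records:
        rid = rec["id"]
        if rid not in latest or rec["updated_at"] > latest[rid]["updated_at"]:
            latest[rid] = rec
    return sorted(latest.values(), key=lambda r: r["id"])

# The "test list" that drove the implementation, red to green:
assert dedupe_latest([]) == []
assert dedupe_latest([{"id": 1, "updated_at": 5}]) == [{"id": 1, "updated_at": 5}]
assert dedupe_latest(
    [{"id": 1, "updated_at": 5}, {"id": 1, "updated_at": 9}, {"id": 2, "updated_at": 1}]
) == [{"id": 1, "updated_at": 9}, {"id": 2, "updated_at": 1}]
```

In a real codebase these assertions would live in a test module and run on every commit via the continuous delivery practices described above.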

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsights, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Reqroots

at Reqroots

7 recruiters
Dhanalakshmi D
Posted by Dhanalakshmi D
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client @ Bangalore – permanent role.

Experience: 4+ Yrs

Responsibilities:

• As part of a team, you will design, develop, and maintain a scalable multi-cloud DevOps blueprint.

• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering & legacy application modernization

• Continuously improve CI/CD pipelines, tools, processes, procedures, and systems relating to developer productivity

• Collaborate continuously with the product development teams to implement CI/CD pipeline.

• Contribute to the subject matter on Developer Productivity, DevOps, Infrastructure Automation best practices.


Mandatory Skills:

• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.

• Strong scripting skills (Java or Python) are a must.

• Experience with automation tools such as Ansible, Chef, Puppet etc.

• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle

• Hands-on working experience in developing or deploying microservices is a must.

• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.

• Knowledge about microservices hosted in leading cloud environments

• Experience with containerizing applications (Docker preferred) is a must

• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.

• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
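A small illustration of the "automating deployment of containerized applications" requirement above: rendering a minimal Kubernetes Deployment manifest from Python. Names, image, and ports are placeholders; in practice the result would be serialized to YAML and applied by kubectl or a CD pipeline:

```python
import json

def deployment_manifest(name, image, replicas=2, port=8080):
    """Render a minimal Kubernetes Deployment (apps/v1) as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": port}]}
                    ]
                },
            },
        },
    }

# Placeholder service name and registry path.
MANIFEST = deployment_manifest(
    "orders-api", "registry.example.com/orders-api:1.4.2", replicas=3
)
RENDERED = json.dumps(MANIFEST, indent=2)
```

Generating manifests from code (or a templating tool like Helm) keeps environment differences in one reviewable place instead of hand-edited YAML.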


Desirable Skills:

• Experience working with Secret management services such as HashiCorp Vault is desirable.

• Experience working with Identity and access management services such as Okta, Cognito is desirable.

• Experience with monitoring systems such as Prometheus, Grafana is desirable.


Educational Qualifications and Experience:

• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)

• 4 to 6 years of hands-on experience in server-side application development & DevOps

Emint
Agency job
via anzy global by Roshan Muniraj
HSR Layout, Bangalore
5 - 8 yrs
₹25L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

FINTECH CANDIDATES ONLY


About the job:


Emint is a fintech startup with the mission to 'Make the best investing product that Indian consumers love to use, with simplicity & intelligence at the core'. We are creating a platform that gives a holistic view of market dynamics which helps our users make smart & disciplined investment decisions. Emint is founded by a stellar team of individuals who come with decades of experience of investing in Indian & global markets. We are building a highly skilled & disciplined team of professionals and are looking for equally motivated individuals to be part of Emint. We are currently looking to hire a DevOps engineer to join our team in Bangalore.


Job Description :


Must Have:


• Hands on experience on AWS DEVOPS

• Experience in Unix with BASH scripting is must

• Experience working with Kubernetes, Docker.

• Experience with GitLab, GitHub, or Bitbucket and artifact repositories

• Packaging, deployment

• CI/CD pipeline experience (Jenkins is preferable)

• CI/CD best practices


Good to Have:


• Startup Experience

• Knowledge of source code management guidelines

• Experience with deployment tools like Ansible / Puppet / Chef is preferable

• IAM knowledge

• Coding knowledge of Python adds value

• Test automation setup experience


Qualifications:


• Bachelor's degree or equivalent experience in Computer Science or related field

• Graduates from IIT / NIT/ BITS / IIIT preferred

• Professionals with fintech ( stock broking / banking ) preferred

• Experience in building & scaling B2C apps preferred

APIwiz
Balaji Vijayan
Posted by Balaji Vijayan
Bengaluru (Bangalore)
3 - 7 yrs
Best in industry
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Linux/Unix
Docker
Kubernetes
+1 more

Overview

Apiwiz (Itorix Inc) is looking for software engineers to join our team, grow with us, introduce us to new ideas and develop products that empower our users. Every day, you’ll work with team members across disciplines developing products for Apiwiz (Itorix Inc). You’ll interact daily with our product managers to understand our domain and create technical solutions that push us forward. We want to work with other engineers who bring knowledge and excitement about our opportunities.

You will impact major features and new product decisions as part of our remarkably high-performing, collaborative team of engineers who thrive on the business impact of their work. With strong team support and significant freedom and self-direction, you will experience the wealth of interesting, challenging problems that only a high-growth startup can provide.


Roles & Responsibilities

  • Build, configure, and manage cloud compute and data storage infrastructure for multiple instances of AWS and Google Cloud Platform.
  • Manage VPCs, security groups, and user access to our various public cloud systems and services.
  • Develop processes and procedures for using cloud-based infrastructures, including access key rotation, disaster recovery, and building new services.
  • Help the business control costs by categorizing and tagging assets running in the cloud.
  • Develop scripts and workflows to manage cloud computing systems
  • Provide oversight on log aggregation and application performance monitoring surrounding our production environments.
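
The cost-control bullet above (categorizing and tagging cloud assets) can be sketched in a few lines of Python. This is a minimal illustration, not Apiwiz's actual process: the team/env/service tagging convention is hypothetical, and the boto3 call in the trailing comment only shows roughly where such a helper would plug in.

```python
# Sketch: consistent cost-allocation tags for cloud assets.
# The tagging convention (CostCenter/Environment/Service) is an
# assumption for illustration; real conventions vary per company.

def build_cost_tags(team: str, env: str, service: str) -> list[dict]:
    """Return AWS-style tag pairs used to categorize cloud spend."""
    if env not in {"dev", "staging", "prod"}:
        raise ValueError(f"unknown environment: {env}")
    return [
        {"Key": "CostCenter", "Value": team},
        {"Key": "Environment", "Value": env},
        {"Key": "Service", "Value": service},
    ]

# Applying the tags would look roughly like:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.create_tags(Resources=[instance_id],
#                   Tags=build_cost_tags("payments", "prod", "api"))
```

Keeping the tag-building logic in one pure function makes the convention enforceable and easy to unit-test, independent of any cloud SDK.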

What we’re looking for

  • 2-3 years of experience provisioning, configuring, administering, automating, monitoring, and supporting enterprise cloud services
  • Strong experience in designing, building, maintaining and securing AWS resources for high-availability and production level systems and services
  • Familiar with Cloud concepts with practical hands-on experience on any Cloud Platform.
  • Hands-on experience with AWS services like Elastic Compute Cloud (EC2), Elastic Load-balancers, S3, Elastic File system, VPC, Route53, and IAM.
  • Willingness to provide 24/7 support for applications and infrastructure
  • Prior experience using infrastructure as a code software tool like Terraform.
  • Knowledge in software provisioning, configuration management, and application-deployment tools like Ansible.
  • Working knowledge of container technologies like Docker & Kubernetes cluster operations.
  • Familiarity with software automation tools Git, Jenkins, Code Pipeline, SonarQube


Read more
Young Pre Series A product start-up
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹22L / yr
CI/CD
Python
Bash
Ruby
Jenkins
+6 more

If the following describes you, we’d love to speak with you.

Skills and Qualifications:

  • Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, TravisCI, or GitLab CI
  • Proficiency in scripting languages such as Python, Bash, or Ruby
  • Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform
  • Experience with cloud platforms such as AWS, Azure, or GCP
  • Knowledge of container orchestration tools such as Docker, Kubernetes, or OpenShift
  • Experience with version control systems such as Git
  • Familiarity with Agile methodologies and practices
  • Understanding of networking concepts and principles
  • Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL
  • Good understanding of security and data protection principles


Roles and responsibilities:

● Building and setting up new development tools and infrastructure

● Working on ways to automate and improve development and release processes

● Deploy updates and fixes

● Helping to ensure information security best practices

● Provide Level 2 technical support

● Perform root cause analysis for production errors

● Investigate and resolve technical issues
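
Two of the responsibilities above (root cause analysis for production errors, investigating technical issues) usually begin with log triage. A minimal standard-library sketch, assuming a hypothetical "LEVEL message" log format; real formats vary per service:

```python
# Sketch: count error signatures in application logs to narrow down
# a likely root cause during an incident review.
from collections import Counter

def top_errors(log_lines, n=3):
    """Return the n most frequent ERROR messages with their counts."""
    errors = Counter()
    for line in log_lines:
        level, _, message = line.partition(" ")
        if level == "ERROR":
            errors[message] += 1
    return errors.most_common(n)

logs = [
    "INFO request served",
    "ERROR db connection refused",
    "ERROR db connection refused",
    "ERROR cache miss storm",
]
print(top_errors(logs))  # → [('db connection refused', 2), ('cache miss storm', 1)]
```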

Read more
Kritter

at Kritter

3 recruiters
Tenzin Kalsang
Posted by Tenzin Kalsang
Bengaluru (Bangalore)
1 - 4 yrs
₹4L - ₹8L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+6 more

Objectives :

  • Building and setting up new development tools and infrastructure
  • Working on ways to automate and improve development and release processes
  • Testing code written by others and analyzing results
  • Ensuring that systems are safe and secure against cybersecurity threats
  • Identifying technical problems and developing software updates and ‘fixes’
  • Working with software developers and software engineers to ensure that development follows established processes and works as intended
  • Planning out projects and being involved in project management decisions


Daily and Monthly Responsibilities :


  • Deploy updates and fixes
  • Build tools to reduce occurrences of errors and improve customer experience
  • Develop software to integrate with internal back-end systems
  • Perform root cause analysis for production errors
  • Investigate and resolve technical issues
  • Develop scripts to automate visualization
  • Design procedures for system troubleshooting and maintenance
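
Tools that reduce occurrences of errors, as the responsibilities above describe, are often no more than disciplined wrappers around flaky operations. A sketch of a retry-with-exponential-backoff helper; the attempt count and delays are illustrative defaults, not recommendations:

```python
# Sketch: retry a transient-failure-prone call (e.g. a deploy step)
# with exponential backoff: wait 1s, 2s, 4s, ... between attempts.
import time

def retry(func, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call func(); on failure, back off and try again, re-raising
    the last exception once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Passing `sleep` as a parameter keeps the helper unit-testable without real waiting.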


Skills and Qualifications :

  • Degree (BSc or equivalent) in Computer Science, Software Engineering, or a relevant field
  • 3+ years of experience as a DevOps Engineer or similar software engineering role
  • Proficient with git and git workflows
  • Good logical skills and knowledge of programming concepts (OOP, data structures)
  • Working knowledge of databases and SQL
  • Problem-solving attitude
  • Collaborative team spirit
Read more
Affine Analytics

at Affine Analytics

1 video
1 recruiter
Santhosh M
Posted by Santhosh M
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹30L / yr
Data Warehouse (DWH)
Informatica
ETL
Google Cloud Platform (GCP)
Airflow
+2 more

Objective

The Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles and Responsibilities:

  • Should be comfortable in building and optimizing performant data pipelines which include data ingestion, data cleansing and curation into a data warehouse, database, or any other data platform using DASK/Spark.
  • Experience in distributed computing environment and Spark/DASK architecture.
  • Optimize performance for data access requirements by choosing the appropriate file formats (Avro, Parquet, ORC, etc.) and compression codecs.
  • Experience in writing production ready code in Python and test, participate in code reviews to maintain and improve code quality, stability, and supportability.
  • Experience in designing data warehouse/data mart.
  • Experience with any RDBMS preferably SQL Server and must be able to write complex SQL queries.
  • Expertise in requirement gathering, technical design and functional documents.
  • Experience in Agile/Scrum practices.
  • Experience in leading other developers and guiding them technically.
  • Experience in deploying data pipelines using automated CI/CD approach.
  • Ability to write modularized reusable code components.
  • Proficient in identifying data issues and anomalies during analysis.
  • Strong analytical and logical skills.
  • Must be able to comfortably tackle new challenges and learn.
  • Must have strong verbal and written communication skills.


Required skills:

  • Knowledge on GCP
  • Expertise in Google BigQuery
  • Expertise in Airflow
  • Good hands-on SQL skills
  • Data warehousing concepts
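
The cleansing-and-curation and reusable-components points above can be illustrated with a tiny, composable pipeline step. The field names and rules here are hypothetical; real curation logic depends on the source system:

```python
# Sketch: a small, reusable cleansing step of the kind a pipeline
# chains before loading records into a warehouse.

def clean_record(record):
    """Normalize one raw record: trim strings, drop empty fields."""
    cleaned = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip() or None
        if value is not None:
            cleaned[key] = value
    return cleaned

def clean_batch(records):
    """Apply clean_record to a batch, skipping records with no id."""
    return [r for r in map(clean_record, records) if "id" in r]

raw = [{"id": 1, "name": "  Asha "}, {"id": None, "name": " "}]
print(clean_batch(raw))  # → [{'id': 1, 'name': 'Asha'}]
```

Keeping each step a pure function over plain dicts is what makes the component reusable across DASK, Spark (via UDFs), or a plain batch job.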
Read more
Quess Corp
Agency job
via Startup Login by Shreya Sanchita
Bengaluru (Bangalore)
5 - 17 yrs
₹20L - ₹40L / yr
Java
JavaScript
React.js
Angular (2+)
AngularJS (1.x)
+6 more

We have the below active job vacancies open with a global aerospace brand in Bangalore (Hebbal) & (Devanahalli).

 

Java Full Stack Developer (Product) | 5-17 Y | Bangalore (Hebbal) | WFO | Locals Only | F2F Must |

 

Role: Java Full Stack Developer

Work Model: Hybrid (2-3 days in office)

Mode Of Interview: F2F @ CV Raman Nagar & Hebbal - Office Site

 

Work Sites: Hebbal until March 2024; thereafter Devanahalli

 

Key Skills: Java, Full Stack, Microservices, Springboot, Spring, JavaScript, HTML/CSS, Angular, Cloud (Azure or AWS), DevOps, Database

 

Levels: (All roles demands high technical expertise and Individual contributions)

Associate Software Engineer: 5 - 8 Yrs

Senior Java Full Stack Developer: 8 -12 Y

Lead Java Full Stack Developer: 12 - 17 Y

 

We prefer applicants from Aerospace, Consumer Tech Products & Electronics, Automotive, Unicorn, and D2C Brands who can join at short notice (30 days).

 

If this role excites you, please apply here

Read more
Toast
Sandeep Dhara
Posted by Sandeep Dhara
Remote, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more

Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.


At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.


About this roll* (Responsibilities) 

  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
  • Partner with development teams to improve services through rigorous testing and release procedures
  • Participate in system design consulting, platform management, and capacity planning
  • Create sustainable systems and services through automation and uplift
  • Balance feature development speed and reliability with well-defined service level objectives


Troubleshooting and Supporting Escalations:

  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
  • Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
  • Implement strategies to increase system reliability and performance through on-call rotation and process optimization
  • Perform and run blameless RCAs on incidents and outages aggressively, looking for answers that will prevent the incident from ever happening again


Do you have the right ingredients? (Requirements)


  • Extensive industry experience, with at least 7 years in SRE and/or DevOps roles
  • Polyglot technologist/generalist with a thirst for learning
  • Deep understanding of cloud and microservice architecture and the JVM
  • Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
  • Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
  • Experience with cloud computing technologies ( AWS cloud provider preferred)



Bread puns are encouraged but not required

Read more
codersbrain

at codersbrain

1 recruiter
Tanuj Uppal
Posted by Tanuj Uppal
Hyderabad, Pune, Noida, Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
Go Programming (Golang)
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure

Golang Developer

Location: Chennai/ Hyderabad/Pune/Noida/Bangalore

Experience: 4+ years

Notice Period: Immediate/ 15 days

Job Description:

  • Must have at least 3 years of experience working with the Go programming language.
  • Strong cloud experience is required for day-to-day work.
  • Good communication skills are a plus.
  • Skills: AWS, GCP, Azure, Golang
Read more
iamneo
Subha shree
Posted by Subha shree
Mumbai, Bengaluru (Bangalore), Chennai, Bhubaneswar
3 - 10 yrs
₹2L - ₹20L / yr
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Corporate Training
Technical Training
  • 5 to 8 years of prior experience in training, or an equivalent record of accelerating learners.

  • Open for Full-time / Part-time / Contract / Freelance.

  • Good in Java along with cloud technologies - AWS/Azure/GCP.

  • Familiarity with working with clients of all sizes in a professional set-up.

  • Excellent communication skills.

  • Detail-oriented.

  • Ability to thrive in a multi-tasking environment and adjust priorities on-the-fly.

  • Proven record of operating as an independent contributor.

  • Ability to deliver engaging presentations.

  • Attention to detail and good problem-solving skills.

  • Excellent interpersonal skills.

  • If you are passionate and meet these requirements and are obsessive about shaping the future of talent, we would love to hear back from you.
Read more
Seek

at Seek

1 recruiter
Siddharth Jaiswal
Posted by Siddharth Jaiswal
Remote, Bengaluru (Bangalore)
1 - 6 yrs
₹9L - ₹30L / yr
Flutter
NodeJS (Node.js)
Amazon Web Services (AWS)
Firebase
Google Cloud Platform (GCP)
About Seek
We are building a consumer-first rewards platform that brings personalised offers and rewards for every consumer.

This is a very early opportunity, you will be working with the Founding team to build Seek's first product and business from the ground up.

What will I do in this role?
- develop designs into high-performance Flutter apps for Android and iOS
- own and be responsible for performance, security and experience on the Seek mobile apps


What's in it for me?
- You'll be one of the earliest members in the team
- Experience first-hand how a startup is built in its early days
- Explore and acquire new skills along with building depth in your desired field of work

Required Skills and Interests
- Flutter
- Firebase
- Android/iOS Development
- AWS
- Full-stack preferred
Read more
Conviva

at Conviva

1 recruiter
Deepa S
Posted by Deepa S
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹28L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+9 more
  • 5+ years of experience in DevOps including automated system configuration, application deployment, and infrastructure-as-code. 
  • Advanced Linux system administration abilities. 
  • Real-world experience managing large-scale AWS or GCP environments. Multi-account management a plus. 
  • Experience with managing production environments on AWS or GCP. 
  • Solid understanding of CI/CD pipelines using GitHub, CircleCI/Jenkins, JFrog Artifactory/Nexus. 
  • Experience on any configuration management tools like Ansible, Puppet or Chef is a must. 
  • Experience in any one of the scripting languages: Shell, Python, etc. 
  • Experience in containerization using Docker and orchestration using Kubernetes/EKS/GKE is a must. 
  • Solid understanding of SSL and DNS. 
  • Experience on deploying and running any open-source monitoring/graphing solution like Prometheus, Grafana, etc. 
  • Basic understanding of networking concepts.
  • Always adhere to security best practices.
  • Knowledge on Bigdata (Hadoop/Druid) systems administration will be a plus.  
  • Knowledge on managing and running DBs (MySQL/MariaDB/Postgres) will be an added advantage. 

What you get to do 

  • Work with development teams to build and maintain cloud environments to specifications developed closely with multiple teams. Support and automate the deployment of applications into those environments 
  • Diagnose and resolve occurring, latent and systemic reliability issues across entire stack: hardware, software, application and network. Work closely with development teams to troubleshoot and resolve application and service issues 
  • Continuously improve Conviva SaaS services and infrastructure for availability, performance and security 
  • Implement security best practices – primarily patching of operating systems and applications 
  • Automate everything. Build proactive monitoring and alerting tools. Provide standards, documentation, and coaching to developers.
  • Participate in 12x7 on-call rotations 
  • Work with third party service/support providers for installations, support related calls, problem resolutions etc. 
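
The "build proactive monitoring and alerting tools" point above ultimately comes down to evaluating metrics against thresholds. A minimal sketch; the 5% error-rate threshold and the minimum-sample guard are illustrative numbers only:

```python
# Sketch: a proactive-alerting check over a metric window.

def should_alert(errors, requests, threshold=0.05, min_requests=100):
    """Alert when the error rate over a window exceeds the threshold.

    min_requests avoids paging on tiny, noisy samples.
    """
    if requests < min_requests:
        return False
    return errors / requests > threshold

print(should_alert(3, 1000))   # → False  (0.3% error rate)
print(should_alert(80, 1000))  # → True   (8% error rate)
```

In practice this logic lives in a tool like Prometheus as an alerting rule; the function form just makes the decision explicit and testable.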



Read more
HappyFox

at HappyFox

1 video
6 products
Lindsey A
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+12 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, Visit! - https://www.happyfox.com/

 

Responsibilities:

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research, build/implement systems, services and tooling to improve uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
  • Implement consistent observability, deployment and IaC setups
  • Patch production systems to fix security/performance issues
  • Actively respond to escalations/incidents in the production environment from customers or the support team
  • Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Participate in infrastructure security audits
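
Meeting customer-facing SLAs, as mentioned above, usually starts with turning an availability target into an error budget. The arithmetic is simple; the 99.9% figure below is an example only:

```python
# Sketch: translate an availability SLA into allowed downtime
# per window, a common first step when defining internal SLOs.

def downtime_budget_minutes(sla_percent, window_days=30):
    """Allowed downtime per window for a given availability SLA."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(downtime_budget_minutes(99.9), 1))   # → 43.2
print(round(downtime_budget_minutes(99.99), 2))  # → 4.32
```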

 

Requirements:

  • At least 5 years of experience in handling/building Production environments in AWS.
  • At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
  • Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts using any scripting language such as Python, Ruby, or Bash
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points – if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.

 

Read more
HappyFox

at HappyFox

1 video
6 products
Lindsey A
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹20L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+9 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, Visit! - https://www.happyfox.com/

 

Responsibilities

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research, build/implement systems, services and tooling to improve uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Implement consistent observability, deployment and IaC setups
  • Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
  • Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Lead infrastructure security audits

 

Requirements

  • At least 7 years of experience in handling/building Production environments in AWS.
  • At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Experience in security hardening of infrastructure, systems and services.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.

 

 

Read more
Digitalshakha
Saurabh Deshmukh
Posted by Saurabh Deshmukh
Bengaluru (Bangalore), Mumbai, Hyderabad
1 - 5 yrs
₹2L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+9 more

Main tasks

  • Supervision of the CI/CD process for the automated builds and deployments of web services and web applications as well as desktop tool in the cloud and container environment
  • Responsibility of the operations part of a DevOps organization especially for development in the environment of container technology and orchestration, e.g. with Kubernetes
  • Installation, operation and monitoring of web applications in cloud data centers, both for development and test purposes and for operating our own production cloud
  • Implementation of installations of the solution especially in the container context
  • Introduction, maintenance and improvement of installation solutions for development in the desktop and server environment as well as in the cloud and with on-premise Kubernetes
  • Maintenance of the system installation documentation and delivery of training

  • Execution of internal software tests and support of involved teams and stakeholders
  • Hands-on experience with Azure DevOps.

Qualification profile

  • Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
  • Experience in software development
  • Installation and administration of Linux and Windows systems including network and firewalling aspects
  • Experience with build and deployment automation with tools like Jenkins, Gradle, Argo, AnangoDB or similar, as well as system scripting (Bash, PowerShell, etc.)
  • Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
  • Server environments, especially application, web-and database servers
  • Knowledge of VMware/K3D/Rancher is an advantage
  • Good spoken and written knowledge of English


Read more
Gipfel & Schnell Consultings Pvt Ltd
Aravind Kumar
Posted by Aravind Kumar
Bengaluru (Bangalore)
6 - 12 yrs
₹20L - ₹40L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+13 more

Job Description:


• Drive end-to-end automation from GitHub/GitLab/BitBucket to deployment, observability, and enabling SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications
• Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT.
• Deploy, configure, and manage SaaS and PaaS cloud platforms and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts, building operations/server instance/app/DB monitoring tools; setting up and managing continuous build and dev project management environments: Jenkins X/GitHub Actions/Tekton, Git, Jira; designing secure networks, systems, and application architectures
• Collaborating with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Planning, researching, and developing security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and best practices
• Installation and use of firewalls, data encryption and other security products and procedures
• Maturity in understanding compliance, policy and cloud governance, and the ability to identify and execute automation.
• At Wesco, we discuss more about solutions than problems. We celebrate innovation and creativity.

Read more
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
8 - 11 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more


Role Introduction

• This role involves guiding the DevOps team towards successful delivery of Governance and toolchain initiatives by removing manual tasks.
• Operate toolchain applications to empower engineering teams by providing reliable, governed self-service tools and supporting their adoption.
• Drive good practice for consumption and utilisation of the engineering toolchain, with a focus on DevOps practices.
• Drive good governance for cloud service consumption.
• Work in a collaborative environment, leading the team and providing technical leadership to team members.
• Set up processes and improvements for teams on supporting and governing various DevOps tooling.
• Coordinate with multiple teams within the organization.
• Lead on handovers from architecture teams to support major project rollouts which require the Toolchain Governance DevOps team to operationally support tooling.

What you will do

• Identify and implement best practices, process improvements and automation initiatives to speed up delivery by removing manual tasks.
• Ensure best practices and processes are documented for reusability, and keep up to date with good practices and standards.
• Build re-usable automation and compliance services, tools and processes.
• Support and manage the toolchain, toolchain changes and selection.
• Identify and implement risk mitigation plans, avoid escalations, and resolve blockers for teams. Toolchain governance will involve operating and responding to alerts, enforcing good tooling governance by driving automation, remediating technical debt, and ensuring the latest tools and versions are in use.
• Triage product pipelines, performance issues, SLA/SLO breaches and service unavailability, along with ancillary actions such as providing access to logs, tools and environments.
• Contribute initial and detailed estimates during roadmap planning or feature estimation/planning of any automation identified for a given toolset.
• Develop, refine, and tune integrations between various tools.
• Discuss implementation and deployment challenges with the Product Owner/team, assist in arriving at probable solutions, and escalate any risks to the DevOps toolchain so they get resolved.
• In consultation with the Head of DevOps and other stakeholders, prioritize items and break them into tasks; be accountable for squad deliverables for the sprint.
• Review current components, plan for upgrades, and ensure they are communicated to the wider audience within the organization.
• Review access and roles, and enhance and automate provisioning.
• Identify and encourage areas for growth and improvement within the team, e.g. conduct regular 1-2-1s with squad members to provide support, mentoring and goal setting.
• Take part in performance management, rewards and recognition of team members, and in the hiring process.
• Plan for upskilling the team on tools and tasks; ensure quick onboarding of new joiners/freshers so they become productive.
• Review ticket metrics to measure the health of the project, including SLAs, and plan for improvement.
• Be on call for critical incidents that happen out of hours, based on tooling SLAs. This may include planning a standby schedule for the squad, carrying out a retrospective for every callout, and reviewing SLIs/SLOs.
• Own the tech/repair debt, risk and compliance for the tooling with respect to infrastructure, pipelines, access etc.
• Track optimum utilization of resources and monitor/track the delivery schedule.
• Review solution designs with the Architects/Principal DevOps Engineers as required.
• Provide monthly reporting which aligns to DevOps Tooling KPIs.

What you will have

• 8+ years of experience, including hands-on DevOps experience and experience in team management.
• Strong communication and interpersonal skills; a team player.
• Good working experience with CI/CD tools like Jenkins, SonarQube, FOSSA, Harness, Jira, JSM, ServiceNow etc.
• Good hands-on knowledge of AWS services like EC2, ECS, S3, IAM, SNS, SQS, VPC, Lambda, API Gateway, CloudWatch, CloudFormation etc.
• Experience in operating and governing a DevOps toolchain.
• Experience in operational monitoring and alerting, and in identifying and delivering on both repair and technical debt.
• Experience and background in ITIL/ITSM processes. The candidate will ensure development of the appropriate (ITSM) model and processes, based on the ITIL Service Management framework. This includes the strategic, design, transition and operation services, and continuous service improvement.
• ITSM leadership experience and coaching of processes.
• Experience with tools like Jenkins, Harness and Fossa.
• Experience hosting and managing applications on AWS/Azure.
• Experience with CI/CD pipelines (Jenkins build pipelines).
• Experience in containerization (Docker/Kubernetes).
• Experience in any programming language (Node.js or Python is preferred).
• Experience in architecting and supporting cloud-based products is a plus.
• Experience in PowerShell & Bash is a plus.
• Able to self-manage multiple concurrent small projects, including managing priorities between projects.
• Able to quickly learn new tools.
• Able to mentor/drive junior team members to achieve the desired outcome of the roadmap.
• Ability to analyse information to identify problems and issues, and make effective decisions within a short span.
• Excellent problem solving and critical thinking.
• Experience in integrating various components, including unit testing and CI/CD configuration.
• Experience reviewing the current toolset and planning for upgrades.
• Experience with Agile frameworks and Jira/JSM tooling.
• Good communication skills and the ability to communicate/work independently with external teams.
• Highly motivated; able to work proficiently both independently and in a team environment.
• Good knowledge and experience with security constructs –


Read more
Kloud9 Technologies
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹30L / yr
Google Cloud Platform (GCP)
PySpark
Python
Scala

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is constrained by the huge sums spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to retail industry giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not just provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions available that best meet our clients' requirements.


●    Overall 8+ years of experience in web application development.

●    5+ years of development experience with Java 8, Spring Boot, microservices and middleware.

●    3+ years of experience designing middleware on the Node.js platform.

●    Good to have: 2+ years of experience using Node.js with the AWS Serverless platform.

●    Good experience with JavaScript / TypeScript, event loops, Express.js, GraphQL, SQL DB (MySQL), NoSQL DB (MongoDB) and YAML templates.

●    Good experience with Test-Driven Development (TDD) and automated unit testing.

●    Good experience exposing and consuming REST APIs on the Java 8 / Spring Boot platform with Swagger API contracts.

●    Good experience building Node.js middleware performing transformations, routing, aggregation, orchestration and authentication (JWT/OAuth).

●    Experience supporting and working with cross-functional teams in a dynamic environment.

●    Experience working in Agile Scrum Methodology.

●    Very good Problem-Solving Skills.

●    Very good learner and passion for technology.

●     Excellent verbal and written communication skills in English

●     Ability to communicate effectively with team members and business stakeholders
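The authentication (JWT/OAuth) step the middleware bullet describes boils down to verifying a signed token on every request. The sketch below implements HS256-style signing and verification with only the standard library to show the mechanics; in production middleware a vetted JOSE/JWT library would be used instead, and all names here are illustrative:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64 for all three segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_token(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None  # malformed token
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        return None  # tampered or signed with a different secret
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

A middleware layer would call `verify_token` on the `Authorization` header and reject the request when it returns `None`.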


Secondary Skill Requirements:

 

● Experience working with any of Loopback, NestJS, Hapi.JS, Sails.JS, Passport.JS


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations of the US, London, Poland and Bengaluru, we help build your career path in cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Read more
Chennai, Bengaluru (Bangalore)
6 - 13 yrs
Best in industry
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconMongoDB
+5 more

6 - 12 years of professional experience with any of the below stacks:  

∙MERN stack: JavaScript - MongoDB - Express - ReactJS - Node.js

∙MEAN stack: JavaScript - MongoDB - Express - AngularJS - Node.js


Requirements:


∙Professional experience with JavaScript and associated web technologies (CSS, semantic HTML). 

∙Proficiency in the English language, both written and verbal, sufficient for success in a remote and largely asynchronous work environment. 

∙Demonstrated capacity to clearly and concisely communicate about complex technical, architectural, and/or organizational problems and propose thorough iterative solutions. 

∙Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent these problems. 

∙Comfort working in a highly agile software development process. 

∙Positive and solution-oriented mindset. 

∙Experience owning a project from concept to production, including proposal, discussion, and execution. 

∙Strong sense of ownership with the eagerness to design and deliver significant and impactful technology solutions. 

∙Demonstrated ability to work closely with other parts of the organization.

Read more
LiftOff Software India

at LiftOff Software India

2 recruiters
Hameeda Haider
Posted by Hameeda Haider
Bengaluru (Bangalore)
3 - 6 yrs
₹1L - ₹15L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Why LiftOff? 

 

We at LiftOff specialize in product creation, for our main forte lies in helping Entrepreneurs realize their dream. We have helped businesses and entrepreneurs launch more than 70 plus products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a DevOps Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

Must Have 

*Work experience of at least 2 years with Kubernetes.

*Hands-on experience working with Kubernetes. Preferably on Azure Cloud.

*Well-versed with Kubectl

*Experience in using Azure Monitor, setting up analytics and reports for Azure containers and services.

*Monitoring and observability

*Setting Alerts and auto-scaling

Nice to have

*Scripting and automation

*Experience with Jenkins or any sort of CI/CD pipelines

*Past experience in setting up cloud infrastructure, configurations and database backups

*Experience with Azure App Service

*Experience of setting up web socket-based applications.

*Working knowledge of Azure APIM



We are a group of passionate people driven by core values. We strive to make every process transparent and have flexible work timings along with excellent startup culture and vibe.

Read more
Seek

at Seek

1 recruiter
Siddharth Jaiswal
Posted by Siddharth Jaiswal
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹7L - ₹15L / yr
skill iconFlutter
Firebase
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
+7 more

About Seek

We are building a consumer-first rewards platform that brings personalised offers and rewards for every consumer.

This is a very early opportunity, you will be working with the Founding team to build Seek's first product and business from the ground up.

 

What will I do in this role?

- develop designs into high-performance Flutter apps for Android and iOS

- own and be responsible for performance, security and experience on the Seek mobile apps

 

What's in it for me?

- You'll be one of the earliest members in the team

- Experience how a startup is built in its early days

- Explore and acquire new skills along with building depth in your desired field of work

 

Required Skills and Interests

- Flutter

- Firebase

- Android/iOS Development

- AWS

- Full-stack experience preferred

Read more
Telus International
Vinay Shankar H S
Posted by Vinay Shankar H S
Bengaluru (Bangalore)
2 - 6 yrs
₹20L - ₹45L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Responsibilities

● Be a hands-on engineer, ensure frameworks/infrastructure built is well designed,

scalable & are of high quality.

● Build and/or operate platforms that are highly available, elastic, scalable, operable and

observable.

● Build/Adapt and implement tools that empower the TI AI engineering teams to

self-manage the infrastructure and services owned by them.

● You will identify, articulate, and lead various long-term tech vision, strategies,

cross-cutting initiatives and architecture redesigns.

● Design systems and make decisions that will keep pace with the rapid growth of TI AI.

● Document your work and decision-making processes, and lead presentations and

discussions in a way that is easy for others to understand.

● Available for on-call during emergencies to handle and resolve problems in a quick and

efficient manner.

 

 

 

Requirements

● 2+ years of Hands-on experience as a DevOps / Infrastructure engineer with AWS and

Kubernetes or similar infrastructure platforms. (preferably AWS)

● Hands-on with DevOps principles and practices ( Everything-as-a-code, CI/CD, Test

everything, proactive monitoring etc).

● Experience in building and operating distributed systems.

● Understanding of operating systems, virtualization, containerization and networks

preferable

● Hands-on coding on any of the languages like Python or GoLang.

● Familiarity with software engineering practices including unit testing, code reviews, and

design documentation.

● Strong debugging and problem-solving skills. Curiosity about how things work and a love of sharing that knowledge with others.

 

Benefits :

● Work with a world class team working on a very forward looking problem

● Competitive Pay

● Flat hierarchy

● Health insurance for the family

Read more
EnterpriseMinds

at EnterpriseMinds

2 recruiters
Rani Galipalli
Posted by Rani Galipalli
Bengaluru (Bangalore)
7 - 12 yrs
₹35L - ₹45L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Please find the JD below: 

  • Candidate should have good platform experience on Azure with Terraform. 
  • The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests. 
  • Good to have experience migrating data from AWS to Azure. 
  • Manage/automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform jobs. 
  • VMs to be provisioned on Azure Cloud and managed. 
  • Good hands-on experience with networking on the cloud is required. 
  • Ability to set up databases on VMs as well as managed DBs, and to properly configure cloud-hosted microservices to communicate with the database services. 
  • Kubernetes, Storage, Key Vault, networking (load balancing and routing) and VMs are the key areas of infrastructure expertise, which are essential. 
  • Requirement is to administer a Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.). 
  • Experience in AWS is desirable. 
  • Python experience is optional; however, PowerShell is mandatory. 
  • Know-how on the use of GitHub. 
  • Administration of Azure Kubernetes Service.
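The blue/green deployment model mentioned above amounts to running two tracks of the application side by side and cutting traffic over by repointing the Service selector. The sketch below models the Service manifest as a plain Python dict to show just that selector switch; in a real cluster the same change would be applied with `kubectl patch` or the Kubernetes API, and the label names here are illustrative:

```python
def switch_track(service_manifest: dict, new_track: str) -> dict:
    """Return a copy of the Service with its selector pointed at `new_track`."""
    if new_track not in ("blue", "green"):
        raise ValueError(f"unknown track: {new_track}")
    # Copy each level we touch so the original manifest is left intact.
    return {**service_manifest,
            "spec": {**service_manifest["spec"],
                     "selector": {**service_manifest["spec"]["selector"],
                                  "track": new_track}}}

service = {
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {"selector": {"app": "web", "track": "blue"}, "ports": [{"port": 80}]},
}

# Traffic currently goes to the "blue" pods; cut over to "green" once it is healthy.
cut_over = switch_track(service, "green")
```

Because only the selector changes, rollback is the same one-line switch back to "blue".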
Read more
EnterpriseMinds

at EnterpriseMinds

2 recruiters
Rani Galipalli
Posted by Rani Galipalli
Bengaluru (Bangalore)
8 - 12 yrs
₹30L - ₹40L / yr
DevOps
skill iconKubernetes
skill iconDocker
Google Cloud Platform (GCP)

Sr Enterprise Software Architect with Cloud skills and preferably having either a GCP Associate or Professional Certification.

The requirement is to understand existing Enterprise Applications and help design a solution to enable Load balancing & Auto Scaling the application to meet certain KPIs.

Should be well versed with

  1. Designing and Deploying Large enterprise software in Cloud
  2. Understands Cloud fundamentals
  3. DevOps & Kubernetes
  4. Experience deploying cloud applications and monitoring operations
  5. Preferably Google Cloud
  6. Associate or Professional Certification.
Read more
EnterpriseMinds

at EnterpriseMinds

2 recruiters
Rani Galipalli
Posted by Rani Galipalli
Bengaluru (Bangalore)
3 - 7 yrs
₹13L - ₹18L / yr
DevOps
skill iconKubernetes
skill iconDocker
Google Cloud Platform (GCP)
Puppet
+2 more
OPERATIONS ENGINEER (DevOps)
Role Description:
● Own, deploy, configure, and manage infrastructure environment and/or applications in
both private and public cloud through cross-technology administration (OS, databases,
virtual networks), scripting, and monitoring automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or
automation incidents/requests.
● Manage rollout of patches and release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command line tools and shells
● Strong working knowledge of Linux/UNIX and related applications
● Knowledge in implementing DevOps and having an inclination towards automation.
● Sound knowledge in infrastructure-as-a-code approaches with Puppet, Chef, Ansible, or
Terraform, and Helm. (preference towards Terraform, Ansible, and Helm)
● Must have strong experience in technologies such as Docker, Kubernetes, OpenShift,
etc.
● Working with REST/gRPC/GraphQL APIs
● Knowledge in networking, firewalls, network automation
● Experience with Continuous Delivery pipelines - Jenkins/JenkinsX/ArgoCD/Tekton.
● Experience with Git, GitHub, and related tools
● Experience in at least one public cloud provider
Skills/Competencies
● Foundation: OS (Linux/Unix) & N/w concepts and troubleshooting
● Automation: Bash or Python or Golang
● CI/CD & Config Management: Jenkin, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infra as a Code: Terraform
● Platform: Docker, K8s, VMs
● Databases: MySQL, PostgreSql, DataStore (Mongo, Redis, AeroSpike) good to have
● Security: Vulnerability Management and Golden Image
● Cloud: Deep working knowledge on any public cloud (GCP preferable)
● Monitoring Tools: Prometheus, Grafana, NewRelic
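The infrastructure-as-code tools listed above (Terraform, Ansible, Helm) all revolve around the same loop: diff the desired state against the actual state and report the drift as a plan. A minimal, illustrative Python sketch of that reconcile step, with made-up resource fields:

```python
def diff_state(desired: dict, actual: dict) -> dict:
    """Return what must be added, changed, or removed to make `actual` match `desired`."""
    add = {k: v for k, v in desired.items() if k not in actual}
    change = {k: (actual[k], v) for k, v in desired.items()
              if k in actual and actual[k] != v}  # (current, wanted)
    remove = [k for k in actual if k not in desired]
    return {"add": add, "change": change, "remove": remove}

# Desired state as declared in code vs. what the cloud API currently reports.
desired = {"instance_type": "e2-medium", "disk_gb": 100, "region": "asia-south1"}
actual = {"instance_type": "e2-small", "disk_gb": 100, "zone": "asia-south1-a"}
plan = diff_state(desired, actual)
```

This is essentially what `terraform plan` prints before `terraform apply` executes the changes.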
Read more
company logo
Agency job
via Molecular Connections by Molecular Connections
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹10L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more
Looking for a Kubernetes Engineer (freelancer) to support our existing team. Please review the list of responsibilities and qualifications.

Responsibility :

  • Install, configure, and maintain Kubernetes clusters.
  • Develop Kubernetes-based solutions.
  • Improve Kubernetes infrastructure.
  • Work with other engineers to troubleshoot Kubernetes issues.

Kubernetes Engineer Requirements & Skills

  • Kubernetes administration experience, including installation, configuration, and troubleshooting
  • Kubernetes development experience
  • Linux/Unix experience
  • Strong analytical and problem-solving skills
  • Excellent communication and interpersonal skills
  • Ability to work independently and as part of a team
Read more
company logo
Agency job
via Molecular Connections by Molecular Connections
Bengaluru (Bangalore)
2 - 4 yrs
₹5L - ₹10L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more

We are looking to fill the role of Kubernetes engineer.  To join our growing team, please review the list of responsibilities and qualifications.

Kubernetes Engineer Responsibilities

  • Install, configure, and maintain Kubernetes clusters.
  • Develop Kubernetes-based solutions.
  • Improve Kubernetes infrastructure.
  • Work with other engineers to troubleshoot Kubernetes issues.

Kubernetes Engineer Requirements & Skills

  • Kubernetes administration experience, including installation, configuration, and troubleshooting
  • Kubernetes development experience
  • Linux/Unix experience
  • Strong analytical and problem-solving skills
  • Excellent communication and interpersonal skills
  • Ability to work independently and as part of a team
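Troubleshooting Kubernetes issues, as both listings above require, usually starts from `kubectl get pods -o json`. The sketch below flags pods that are not fully ready by parsing a trimmed sample of that output; the pod names are invented for the example, and a real script would read the JSON from the `kubectl` subprocess instead:

```python
import json

def unready_pods(pod_list: dict) -> list:
    """Names of pods where any container is not ready (or none have started)."""
    bad = []
    for pod in pod_list.get("items", []):
        statuses = pod["status"].get("containerStatuses", [])
        if not statuses or not all(c.get("ready") for c in statuses):
            bad.append(pod["metadata"]["name"])
    return bad

# Trimmed stand-in for `kubectl get pods -o json` output.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "api-7c9"}, "status": {"containerStatuses": [{"ready": true}]}},
  {"metadata": {"name": "worker-f41"}, "status": {"containerStatuses": [{"ready": false}]}},
  {"metadata": {"name": "init-x"}, "status": {}}
]}
""")
```

From the flagged names, the next step is `kubectl describe pod` and `kubectl logs` to find the cause.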
Read more
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
skill iconJava
skill iconPython
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & Develop high performance and scalable solutions that meet the needs of our customers.

Closely work with the Product Management, Architects and cross functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in Bigdata stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java, Python including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like- Hive, Spark, Storm, Flink, Beam, Airflow, Nifi etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one of general-purpose No-SQL data stores like Elasticsearch, MongoDB, Redis, RedShift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Read more
Jar

at Jar

1 video
2 recruiters
Nischal Hebbar
Posted by Nischal Hebbar
Remote, Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹30L / yr
skill iconJenkins
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
+2 more
  • 3 - 6 years of software development, and operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
  • Design cloud infrastructure that is secure, scalable, and highly available on AWS
  • Experience managing any distributed NoSQL system (Kafka/Cassandra/etc.)
  • Experience with Containers, Microservices, deployment and service orchestration using Kubernetes, EKS (preferred), AKS or GKE.
  • Strong scripting language knowledge, such as Python, Shell
  • Experience and a deep understanding of Kubernetes.
  • Experience in Continuous Integration and Delivery.
  • Work collaboratively with software engineers to define infrastructure and deployment requirements
  • Provision, configure and maintain AWS cloud infrastructure
  • Ensure configuration and compliance with configuration management tools
  • Administer and troubleshoot Linux-based systems
  • Troubleshoot problems across a wide array of services and functional areas
  • Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
  • Perform infrastructure cost analysis and optimization
 
Tech used:
  • AWS
  • Docker
  • Kubernetes
  • Envoy
  • Istio
  • Jenkins
  • Cloud Security & SIEM stacks
  • Terraform
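The "infrastructure cost analysis and optimization" responsibility often starts as a small script over a cost-and-usage export. The sketch below aggregates spend per service from a CSV; the column names are illustrative, not the exact AWS Cost and Usage Report schema:

```python
import csv
import io
from collections import defaultdict

def cost_by_service(report: str) -> dict:
    """Total spend per service from a CSV cost export."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report)):
        totals[row["service"]] += float(row["cost_usd"])
    return dict(totals)

# Stand-in for a billing export pulled from the cloud provider.
sample = """service,cost_usd
AmazonEC2,412.50
AmazonEKS,73.00
AmazonEC2,58.25
AmazonS3,19.10
"""
totals = cost_by_service(sample)
```

Sorting `totals` by value points at the services worth optimizing first (right-sizing, reserved capacity, cleanup of unused resources).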
Read more
css corp
Agency job
via staff hire solutions by Purvaja Patidar
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹11L / yr
skill iconData Science
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+9 more
Design and implement cloud solutions; build MLOps on the cloud (GCP). Build CI/CD pipelines orchestrated by GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools. Review data science models; run code refactoring and optimization, containerization, deployment, versioning, and monitoring of model quality. Test and validate data science models, and automate those tests. Communicate with a team of data scientists, data engineers and architects, and document the processes.

Required Qualifications: Ability to design and implement cloud solutions and to build MLOps pipelines on cloud solutions (GCP). Experience with MLOps frameworks like Kubeflow, MLflow, DataRobot, Airflow, etc., and experience with Docker, Kubernetes, and OpenShift. Programming languages like Python, Go, Ruby or Bash; a good understanding of Linux; and knowledge of frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc. Ability to understand the tools used by data scientists, and experience with software development and test automation. Fluent in English, with good communication skills and the ability to work in a team.

Desired Qualifications: Bachelor's degree in Computer Science or Software Engineering. Experience in using GCP services. Good to have: Google Cloud Certification.
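The model review and quality-monitoring step described above typically ends in a CI gate that blocks promotion when metrics regress. A minimal sketch of such a gate, with illustrative metric names and thresholds:

```python
def quality_gate(metrics: dict, thresholds: dict) -> list:
    """Return failed checks; an empty list means the model may be promoted."""
    failures = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None or value < minimum:
            failures.append(f"{name}: {value} < required {minimum}")
    return failures

# Metrics produced by the validation stage of the pipeline (values illustrative).
candidate = {"accuracy": 0.91, "recall": 0.78}
gate = quality_gate(candidate, {"accuracy": 0.90, "recall": 0.80})
```

In a Kubeflow or GitLab CI pipeline, a non-empty `gate` would fail the stage, so an under-performing model never reaches deployment.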
Read more
Tredence
Rohit S
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with Leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions and build strategy and roadmap
• S/he possesses a wide exposure to complete lifecycle of data starting from creation to consumption
• S/he has in the past built repeatable tools / data-models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes.
o Worked with business teams or was a part of the team that implemented process re-engineering driven by data analytics/insights
• Should have deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems. E.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset.
Ability to design and enhance scalable data platforms to address the business need
• Working experience on data engineering tool for one or more cloud platforms -Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and Clients to create last mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – Articles/White Papers/Interviews
Mandatory Skills Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Read more
Talent500
Agency job
via Talent500 by ANSR by Raghu R
Bengaluru (Bangalore)
1 - 10 yrs
₹5L - ₹30L / yr
skill iconPython
ETL
SQL
SQL Server Reporting Services (SSRS)
Data Warehouse (DWH)
+6 more

A proficient, independent contributor that assists in technical design, development, implementation, and support of data pipelines; beginning to invest in less-experienced engineers.

Responsibilities:

- Design, create and maintain on-premise and cloud-based data integration pipelines. 
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines to enable BI, Analytics and Data Science teams that assist them in building and optimizing their systems
- Assists in the onboarding, training and development of team members.
- Reviews code changes and pull requests for standardization and best practices
- Evolve existing development to be automated, scalable, resilient, self-serve platforms
- Assist the team in design and requirements gathering for technical and non-technical work to drive the direction of projects
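The extraction-transformation-loading responsibility above can be shown in miniature. The toy pass below extracts rows from CSV, transforms them (normalising currency), and loads them into SQL, all with the standard library; the table, columns, and conversion rates are invented for the sketch:

```python
import csv
import io
import sqlite3

raw = """order_id,amount,currency
1001,250.00,INR
1002,3.50,USD
1003,120.00,INR
"""

RATES_TO_INR = {"INR": 1.0, "USD": 83.0}  # assumed static rates for the example

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount_inr REAL)")
for row in csv.DictReader(io.StringIO(raw)):                               # extract
    amount_inr = float(row["amount"]) * RATES_TO_INR[row["currency"]]      # transform
    conn.execute("INSERT INTO orders VALUES (?, ?)",
                 (int(row["order_id"]), amount_inr))                       # load
conn.commit()
total = conn.execute("SELECT ROUND(SUM(amount_inr), 2) FROM orders").fetchone()[0]
```

Production pipelines swap the in-memory pieces for real sources and sinks (SSIS/MuleSoft jobs, Azure Synapse, a data lake), but the extract-transform-load shape is the same.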

 

Technical & Business Expertise:

- Hands-on integration experience in SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lake concepts
- Proven intermediate-level experience writing Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP) 
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)

Read more
IntelliFlow Solutions Pvt Ltd

at IntelliFlow Solutions Pvt Ltd

2 candid answers
Divyashree Abhilash
Posted by Divyashree Abhilash
Remote, Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹12L / yr
DevOps
skill iconKubernetes
skill iconDocker
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more
IntelliFlow.ai is a next-gen technology SaaS Platform company providing tools for companies to design, build and deploy enterprise applications with speed and scale. It innovates and simplifies the application development process through its flagship product, IntelliFlow. It allows business engineers and developers to build enterprise-grade applications to run frictionless operations through rapid development and process automation. IntelliFlow is a low-code platform to make business better with faster time-to-market and succeed sooner.

Looking for an experienced candidate with strong development and programming experience, knowledge preferred-

  • Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
  • A strong development background with programming experience in Java and/or Node.js (other programming languages such as Groovy/Python are a big bonus)
  • Proficient with Unix systems and bash
  • Proficient with git/GitHub/GitLab/bitbucket

 

Desired skills-

  • Docker
  • Kubernetes
  • Jenkins
  • Experience in any scripting language (Python, Shell scripting, JavaScript)
  • NGINX / Load Balancer
  • Splunk / ETL tools
Read more
British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: resolving production issues to fix P1-P4 service issues; problems relating to
introducing new technology; and resolving major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of
programming concepts and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial
risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: Experience preferred, but learning essential.
• Big Data: Experience with Big Data methodology and technologies
• Programming : Python or Java worked with Data (ETL)
• DevOps: Understand how to work in a Dev Ops and agile way / Versioning / Automation / Defect
Management – Mandatory
• Agile methodology - knowledge of Jira
Read more
TrustCheckr

at TrustCheckr

4 recruiters
Anand Gopalakrishna
Posted by Anand Gopalakrishna
Bengaluru (Bangalore)
1 - 3 yrs
₹8L - ₹10L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconAngular (2+)
skill iconAngularJS (1.x)
skill iconMongoDB
+8 more
Full-stack development: AWS development and deployment experience using AWS API calls.
Exposure to the following AWS / GCP modules and services (either skill set is fine): Amazon S3, EC2, any AWS-supported database, MongoDB collections, DevOps, 
AWS toolkit / Flask REST API development or Node.js/React.js; web scraping experience; API integration of the various sources for fraud solutions. Should be good at communication, should be able to lead the team technically in resolving any technical issues, should be able to own modules which are critical to business needs, and should build and nurture the young team members in the organization.
Aware of microservices. GOOD TO HAVE: familiarity with UI development,
Directed Acyclic Graph (DAG) pipelines and CI/CD with AWS,
UI with React JS
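The web-scraping requirement in miniature: extracting link targets from fetched HTML. The sketch below uses only the standard library and parses a hard-coded snippet (no real site is fetched, and the paths are invented); real scraping work layers fetching, politeness, and error handling on top of this parse step:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<html><body><a href="/fraud-check">Check</a> <a href="/api/docs">API</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
```

The same pattern scales up by feeding `parser.feed` the response body from an HTTP client and following the collected links.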
Read more
Deep-Rooted.co (formerly Clover)

at Deep-Rooted.co (formerly Clover)

6 candid answers
1 video
Likhithaa D
Posted by Likhithaa D
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹15L / yr
Mobile App Testing (QA)
Software Testing (QA)
Flutter Test
Playwright
Google Cloud Platform (GCP)
+2 more

Deep-Rooted.Co is on a mission to get fresh, clean, community (local farmer) produce from harvest to your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.

 

Founded out of Bangalore by Arvind, Avinash, Guru and Santosh, with the support of our Investors Accel, Omnivore & Mayfield, we raised $7.5 million in Seed, Series A and Debt funding to date from investors including ACCEL, Omnivore, Mayfield among others.

 

We began our journey in August 2020. Deep-Rooted.Co is helping transform the Agri-tech space in India, and we are looking to build out our team. We are a demand-backed supply chain for perishables that is focused on quality, consistency, traceability, and a high degree of predictability. We work closely with our farming community, enabling fresh vegetables to reach our customers. We are growing at breakneck speed and are rapidly expanding into new markets across India over the next 6 to 18 months. We deliver fresh-off-the-farm produce to our customers' doorsteps within just 16 to 20 hours of harvest, to millions of people in Bangalore, Hyderabad, and Chennai. We are on a journey of expansion to newer cities, which will be managed seamlessly through a tech platform that has been designed and built to transform the Agri-Tech sector.

 

Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.

 

Purpose of the role:

  • As a start-up, we have multiple Flutter apps used by customers and operations folks. We need you to list out the various scenarios and automate their test cases.
  • We release continuously every week; regression suite and exploratory test suites are extremely important for you to develop and maintain.
  • Our web interfaces include those used by operations and our customer-facing websites. Testing the same and automation will be owned by you.
  • Transform data in a format that aids easy decision-making for Product, Marketing and Business Heads.
  • Understand the business problem, validate the same using the technology and certify for production - no hand-offs - full path to production is yours.
  • You will deal with mobile-based front-ends, web front ends, REST APIs, and deal with the growing challenges.

 

 

Technical Expertise:

  • Good knowledge of and experience with Dart and the Flutter framework.
  • Good knowledge and experience with mobile app testing/web app testing
  • Experience with automation testing frameworks - Flutter Test and Playwright (Web)
  • Experience with the GCP cloud platform
  • Experience with testing Approaches - Exploratory, Edge Case, and Automation.
  • Ability to create Test Plans and Test cases given business scenarios and features.
  • Have a passion for technology and computer science!! - We need an engineer with fire in the belly!!
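Listing scenarios for test plans, as the purpose section asks, can be mechanised: take the input dimensions of a feature and generate the cross product as concrete test cases. The dimension names below are illustrative; the generated dicts would then feed a parametrised Flutter Test or Playwright suite:

```python
from itertools import product

def generate_cases(dimensions: dict) -> list:
    """Cross product of all dimension values, one dict per test case."""
    keys = list(dimensions)
    return [dict(zip(keys, combo)) for combo in product(*dimensions.values())]

cases = generate_cases({
    "network": ["online", "offline"],
    "cart": ["empty", "one_item", "many_items"],
    "user": ["guest", "logged_in"],
})
```

Twelve cases fall out of three small dimensions here; edge-case and exploratory testing then prune or extend this grid rather than starting from a blank page.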
Read more
BetterPlace

at BetterPlace

1 video
4 recruiters
Sikha Dash
Posted by Sikha Dash
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹20L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+6 more

We are looking for a DevOps Engineer for managing the interchange of data between the server and the users. Your primary responsibility will be the development of all server-side logic, definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the frontend. You will also be responsible for integrating the front-end elements built by your co-workers into the application. Therefore, a basic understanding of frontend technologies is necessary as well.

What we are looking for

  • Must have strong knowledge of Kubernetes and Helm 3
  • Should have previous experience Dockerizing applications
  • Should be able to automate manual tasks using Shell or Python
  • Should have good working knowledge of the AWS and GCP clouds
  • Should have previous experience working with Bitbucket, GitHub, or any other VCS
  • Must be able to write Jenkins pipelines and have working knowledge of GitOps and ArgoCD
  • Hands-on experience in proactive monitoring using tools like New Relic, Prometheus, Grafana, Fluent Bit, etc.
  • Should have a good understanding of the ELK stack
  • Exposure to Jira, Confluence, and sprints
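The "automate manual tasks using Shell or Python" expectation above is best shown by example. A minimal sketch of one such task: linting a Kubernetes manifest for mutable image tags before release (the sample manifest and registry names are illustrative only):

```python
import re

# One manual review step scripted away: flag container images in a
# Kubernetes manifest that use a mutable ":latest" tag (or no tag at all).
IMAGE_LINE = re.compile(r"image:\s*(\S+)")

def unpinned_images(manifest: str) -> list:
    """Return every image reference that is not pinned to a fixed tag."""
    bad = []
    for match in IMAGE_LINE.finditer(manifest):
        image = match.group(1)
        # No ":" means the implicit "latest" tag is in effect.
        tag = image.rsplit(":", 1)[1] if ":" in image else "latest"
        if tag == "latest":
            bad.append(image)
    return bad

# Sample manifest fragment; registry and service names are illustrative.
SAMPLE = """
containers:
  - name: api
    image: registry.example.com/api:1.4.2
  - name: worker
    image: registry.example.com/worker:latest
  - name: cache
    image: redis
"""
```

Wired into a Jenkins or GitOps pipeline stage, a non-empty result would fail the build before an unpinned image reaches production.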

What you will do:

  • Mentor junior DevOps engineers and raise the team's bar
  • Act as primary owner of tech best practices, tech processes, DevOps initiatives, and timelines
  • Oversee all server environments, from Dev through Production
  • Take responsibility for automation and configuration management
  • Provide stable environments for quality delivery
  • Assist with day-to-day issue management
  • Take the lead in containerizing microservices
  • Develop deployment strategies that allow DevOps engineers to successfully deploy code in any environment
  • Enable CI/CD automation
  • Implement dashboards to monitor various systems and applications
  • 1-3 years of experience in DevOps
  • Experience in setting up front-end best practices
  • Experience working in high-growth startups
  • Ownership and a proactive attitude
  • A mentorship and upskilling mindset

  • What you'll get:
    • Health Benefits
    • Innovation-driven culture
    • Smart and fun team to work with
    • Friends for life
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
Job Description:

About BootLabs

https://www.bootlabs.in/

- We are a boutique tech consulting partner, specializing in cloud-native solutions.
- We are obsessed with anything "CLOUD". Our goal is to seamlessly automate the development lifecycle and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation and managed cloud services.
- We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:


Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.
  • AWS
              Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
              Data: RDS, DynamoDB, Elasticsearch
              Workload: EC2, EKS, Lambda, etc.
  • Azure
              Networking: VNET, VNET Peering
              Data: Azure MySQL, Azure MSSQL, etc.
              Workload: AKS, Virtual Machines, Azure Functions
  • GCP
              Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
              Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
              Workload: GKE, Instances, App Engine, Batch, etc.

Experience with any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking and service mesh. Has used a package manager like Helm.
Scripting experience (Bash/Python), automation in pipelines when required, system services.
Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code.
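On the infrastructure-automation point: Terraform also reads a JSON variant of its configuration syntax (files ending in `.tf.json`), so minimal module scaffolding can be generated with nothing but the Python stdlib. A sketch, with an assumed GCS bucket resource and illustrative names:

```python
import json

# Terraform treats *.tf.json files exactly like HCL, which makes
# minimal scaffolding scriptable. Bucket name/location are illustrative.
def render_bucket_config(name: str, location: str) -> str:
    """Render a single google_storage_bucket resource as Terraform JSON."""
    config = {
        "resource": {
            "google_storage_bucket": {
                name: {  # resource label
                    "name": name,
                    "location": location,
                    "force_destroy": False,
                }
            }
        }
    }
    return json.dumps(config, indent=2)
```

Writing the result to `bucket.tf.json` and running `terraform plan` picks it up like any hand-written module; versioning the generator script then versions the infrastructure.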

Optional:

Experience in any programming language is not required but is appreciated.
Good experience in Git, SVN or any other code management tool is required.
DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure and code.
Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
Cornertree

Posted by Swapnil Biswas
Bengaluru (Bangalore)
3 - 9 yrs
₹4L - ₹8L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
Data Structures
+25 more
We are looking for a strong Java Developer to join our team! As a Java Developer, you will need a strong understanding of Java and its frameworks, like Spring, and experience working with cloud and containers.

The Developer will perform duties and tasks to support complete life cycle management (for example: analysis, technical requirements, design, coding, testing and implementation of systems).
The Developer will work closely with the Product and Technical teams across different regions, primarily Europe, and will be part of an Agile team. The role includes research and continuous development of new products based on new technologies. This position collaborates with the operations team routinely, hence excellent English communication skills (both written and verbal) are essential.
 A clean coder who will always leave the code in better shape than they found it.
 A curious person who never stops learning and loves to try new things, even when they don't succeed on the first try.
 A team-oriented developer with the motivation to bring out the best in others.
 A person who shares our appreciation for transparency and is willing to share their experience and knowledge for the benefit of the team.
 Someone who is willing to take a stand for something they believe in.
 Somebody who takes pride in their work and knows that development is a craftsmanship.
Duties & Responsibilities
 Conducts systems and requirements analysis; creates and contributes to task lists, cost and time analysis.
 Performs assigned functions and tasks to meet project plan and quality review requirements.
 Raises issues as appropriate to support effective resolutions.
 Analyzes specifications and user requirements to perform assigned applications development work.
 Assists with system and component designs to meet requirements.
 Participates in and documents design and code reviews to improve quality.
 Analyzes, designs, codes, tests, and documents to develop application software.
 Develops unit tests and unit test plans to deliver quality code.
 Performs applications maintenance and support functions to support problem resolution.
Qualifications:
• Bachelor's degree in Computer Science or an IT-related field
• 4-7 years of experience working across different product domains in a product development/engineering role
• Good communication skills, necessary to manage business requests and work with different teams across different geographies and time zones; experience working with remote and distributed teams will be an added advantage
• Hands-on working knowledge and experience is required in:
a. Java (Spring, Spring Boot, etc.)
b. Experience working in GCP, AWS or Azure
c. Experience working with containers and Unix platforms
d. Relational databases (PostgreSQL, MySQL, SQL, etc.)
e. Messaging (RabbitMQ, ActiveMQ, Kafka, etc.)
f. Agile methodologies (Scrum, TDD, BDD, etc.)
g. Understanding of Microservices Architecture, Domain-Driven Design, Test-Driven Development and secure design patterns and architecture is a must
h. Data structures and algorithms using Java or other programming languages
i. Strong organizational skills
• Experience with several of the following tools/technologies is desirable:
a. Git (Bitbucket, GitLab, etc.), Jira, Gradle, Maven, Jenkins, SharePoint, Eclipse/IntelliJ
b. Multiple Java technologies around Spring, Spring Boot, etc.
c. Design patterns and their implementation
d. Development of complex application and system architectures
e. NoSQL databases (Redis, Mongo, etc.)
f. Experience working with CI/CD pipelines with, for example, GitHub Actions
• Knowledge of the following technologies is a plus:
a. Other programming languages (NodeJS, etc.)
b. Continuous Integration and Continuous Delivery tools like Jenkins, Git, etc.
c. Application servers like Tomcat, etc.
d. HTML5, CSS, AJAX, React
e. Full-stack development
f. Secure development based on OWASP standards
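On the design-patterns item above: a strategy pattern in miniature, sketched in Python for brevity (the shape maps one-to-one onto a Java interface with two implementations; the retry-policy names are hypothetical illustrations, not part of any framework):

```python
from abc import ABC, abstractmethod

class RetryPolicy(ABC):
    """Strategy interface: how long to wait before retry attempt n."""
    @abstractmethod
    def delay(self, attempt: int) -> float: ...

class FixedBackoff(RetryPolicy):
    """Concrete strategy: constant delay between attempts."""
    def __init__(self, seconds: float):
        self.seconds = seconds

    def delay(self, attempt: int) -> float:
        return self.seconds

class ExponentialBackoff(RetryPolicy):
    """Concrete strategy: delay doubles with each attempt."""
    def __init__(self, base: float):
        self.base = base

    def delay(self, attempt: int) -> float:
        return self.base * (2 ** attempt)

def schedule(policy: RetryPolicy, attempts: int) -> list:
    # The caller depends only on the interface, so policies swap
    # freely at runtime -- the essence of the pattern.
    return [policy.delay(n) for n in range(attempts)]
```

In Java the same design would be an interface plus two classes, injected wherever retries are needed.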
Bootlabs Technologies Private Limited
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
Required

• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.
o AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.

o Azure
Networking: VNET, VNET Peering
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions

o GCP
Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.

• Experience with any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.

• Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking and service mesh. Has used a package manager like Helm.

• Scripting experience (Bash/Python), automation in pipelines when required, system services.

• Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code.

Optional
• Experience in any programming language is not required but is appreciated.
• Good experience in Git, SVN or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
EnterpriseMinds

Posted by phani kalyan
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹35L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+5 more
  • Candidate should have good platform experience on Azure with Terraform.
  • The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
  • Good to have experience migrating data from AWS to Azure.
  • Manage/automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform jobs.
  • VMs to be provisioned and managed on Azure Cloud.
  • Good hands-on experience with networking on cloud is required.
  • Ability to set up a database on a VM as well as a managed DB; cloud-hosted microservices need to be properly set up to communicate with the DB services.
  • Kubernetes, Storage, Key Vault, networking (load balancing and routing) and VMs are the key areas of infrastructure expertise, which are essential.
  • Requirement is to administer a Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
  • Experience in AWS is desirable.
  • Python experience is optional; however, PowerShell is mandatory.
  • Know-how on the use of GitHub.
  • Administration of Azure Kubernetes Service.
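The "help developers create the pipelines and K8s deployment manifests" point above is easy to script. A minimal sketch that emits an apps/v1 Deployment (names and image are illustrative only; a real helper would also template probes, resources and extra labels):

```python
import json

def deployment_manifest(name: str, image: str, replicas: int) -> dict:
    """Build a minimal apps/v1 Deployment as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# kubectl accepts JSON as readily as YAML, so json.dumps suffices here.
manifest_json = json.dumps(
    deployment_manifest("api", "registry.example.com/api:1.0.0", 3), indent=2
)
```

The same generator drops into a Jenkins stage so every service gets a consistent, reviewed manifest instead of hand-edited copies.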
Unscript AI

Posted by Ritwika Chowdhury
Bengaluru (Bangalore)
3 - 6 yrs
₹25L - ₹50L / yr
JavaScript
NodeJS (Node.js)
React.js
NextJs (Next.js)
Kubernetes
+5 more

About Us


UnScript.AI is a Deep-Tech company with a focus on creating highly engaging interactive videos solving conversion and engagement problems leveraging the most recent breakthroughs in Artificial Intelligence.


Responsibilities:


Full Stack Engineers at UnScript.AI are involved in all parts of the product life cycle: idea generation, design, planning, execution, and shipping. In day-to-day work, we create reliable, scalable and highly performant systems. A commitment to teamwork, hustle and good communication skills are our key requirements. As a Full Stack Engineer, you will lead end-to-end development of modules with complete ownership of design, planning, coding and delivery.


  • The Software Development Engineer's core responsibilities include working on highly maintainable and unit-tested software components/systems that address real-world problems.
  • Ensure quality at every level be it problem-solving, design, code or bug fixes.
  • Should be able to collaborate with product managers, architects and other stakeholders to ensure smooth execution of sprints.
  • Own production issues and unblock users; be able to troubleshoot and fix production issues on priority.

SKILL SETS REQUIRED: Kubernetes, AWS/any other cloud, NodeJS, JavaScript, React/Next, Design thinking, API Framework


REQUIRED QUALIFICATIONS


  • B.Tech or higher in Computer Science from a premier institute. (We are willing to waive this requirement if you are an exceptional programmer).

  • You are a strong coder who loves to explore multiple languages and tech stacks. We are language-agnostic, and we love people who like to explore multiple stacks. We're looking for people who can write clean, effective code.

  • Experience building scalable and performant web systems with a clear focus on reusable modules.

  • You are comfortable in a high-paced environment and can respond to urgent (and at times ambiguous) requests.

  • Ability to translate fuzzy business problems into technical problems, come up with design, estimates and planning, then execute and deliver the solution independently.

Follow the Agile development model to incrementally build out applications, with regular reviews with Product.


  • Knowledge of AWS would be great to have.

  • Ability to work with databases and data pipelines.

About Us:

UnScript is a deep-tech startup that builds a powerful SaaS tool to generate videos using AI. UnScript was founded by distinguished alums from IIT, with exemplary backgrounds in business and technology. UnScript has raised two rounds of funding from global VCs, with Peter Thiel (Co-founder, PayPal) and Reid Hoffman (Co-founder, LinkedIn) as investors.


The Team:

Unscript was started by Ritwika Chowdhury. Our team brings experience from foremost institutions and organizations like IIT Kharagpur, Microsoft Research, IIT Bombay, IIIT, BCG, etc. We are thrilled to be backed by some of the world's largest VC firms and angel investors.
