Senior Solution Architect
Posted by Tanuj Uppal
10 - 15 yrs
₹10L - ₹15L / yr
Bengaluru (Bangalore)
Skills
Microsoft Windows Azure
Snowflake
Delivery Management
ETL
PySpark
SQL
Project delivery
  • Sr. Solution Architect 
  • Job Location – Bangalore
  • Need candidates who can join in 15 days or less.
  • Overall, 12-15 years of experience.

 

We are looking for a Sr. Solution Architect with this tech stack who also has a Delivery Manager background: someone with strong business and IT stakeholder collaboration and negotiation skills, who can provide thought leadership, collaborate on the development of product roadmaps, influence decisions, and negotiate effectively with business and IT stakeholders.

 

  • Building data pipelines using Azure data tools and services (Azure Data Factory, Azure Databricks, Azure Functions, Spark, Azure Blob Storage/ADLS, Azure SQL, Snowflake, etc.)
  • Administration of cloud infrastructure in public clouds such as Azure
  • Monitoring cloud infrastructure, applications, big data pipelines and ETL workflows
  • Managing outages, customer escalations, crisis management, and other similar circumstances.
  • Understanding of DevOps tools and environments like Azure DevOps, Jenkins, Git, Ansible, Terraform.
  • SQL, Spark SQL, Python, PySpark
  • Familiarity with agile software delivery methodologies
  • Proven experience collaborating with global Product Team members, including business stakeholders located in North America
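The pipeline work listed above typically centres on incremental (high-water-mark) loads. Below is a minimal sketch of that pattern in plain Python, using sqlite3 as a stand-in for Azure SQL or Snowflake; the table and column names are hypothetical, not from any real pipeline.

```python
import sqlite3

def incremental_load(conn, watermark):
    """Copy rows newer than the last watermark from a source table into
    a target table and return the new watermark (high-water-mark load).
    Table and column names are hypothetical."""
    rows = conn.execute(
        "SELECT id, payload, updated_at FROM src_orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()
    conn.executemany(
        "INSERT OR REPLACE INTO tgt_orders (id, payload, updated_at) "
        "VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
    # Advance the watermark to the newest timestamp ingested this run.
    return max((r[2] for r in rows), default=watermark)

# Demo on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_orders (id INTEGER PRIMARY KEY, payload TEXT, updated_at INTEGER)")
conn.execute("CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, payload TEXT, updated_at INTEGER)")
conn.executemany("INSERT INTO src_orders VALUES (?, ?, ?)",
                 [(1, "a", 10), (2, "b", 20), (3, "c", 30)])
wm = incremental_load(conn, 15)  # picks up only rows 2 and 3
```

In a real Azure Data Factory or PySpark pipeline, the watermark would be persisted between runs (for example, in a control table or pipeline variable).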


Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma
Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

About codersbrain

Founded: 2015
Type: Services
Size: 20-100
Stage: Bootstrapped

About

Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers. Our success comes from how seamlessly we integrate with our clients.

Connect with the team

Shreya Trivedi


Similar jobs

AdTech Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
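As a concrete instance of the least-privilege requirement above, an IAM policy can be scoped to a single bucket and read-only actions. A sketch that builds such a policy document in Python; the bucket name is a placeholder.

```python
def read_only_bucket_policy(bucket):
    """Least-privilege IAM policy: read-only access to one S3 bucket.

    Note the split: s3:GetObject applies to object ARNs (bucket/*),
    while s3:ListBucket applies to the bucket ARN itself.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
        ],
    }

policy = read_only_bucket_policy("ml-artifacts")
```

Attaching narrowly scoped documents like this per workload, rather than broad account-wide roles, is the substance of IAM least privilege.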

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
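Policy-as-code scanners such as Checkov and tfsec essentially walk a parsed plan and match resources against rules. The toy illustration below captures that idea in Python; the plan structure and the rule are simplified stand-ins, not the real tool APIs.

```python
def find_public_buckets(plan):
    """Flag S3 bucket resources whose ACL grants public access.

    `plan` mimics the resource list of a parsed Terraform plan;
    real scanners evaluate many such rules against the full graph.
    """
    findings = []
    for res in plan.get("resources", []):
        is_bucket = res.get("type") == "aws_s3_bucket"
        acl = res.get("values", {}).get("acl")
        if is_bucket and acl in ("public-read", "public-read-write"):
            findings.append(res["name"])
    return findings

plan = {"resources": [
    {"type": "aws_s3_bucket", "name": "logs", "values": {"acl": "private"}},
    {"type": "aws_s3_bucket", "name": "assets", "values": {"acl": "public-read"}},
]}
violations = find_public_buckets(plan)  # flags only "assets"
```

Wired into CI, a non-empty findings list fails the pipeline, which is what "enforce misconfiguration scanning" means in practice.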

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
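For the drift-detection item, the Population Stability Index (PSI) is a common metric: bin a baseline sample, compare the live distribution against it, and alert above a rule-of-thumb threshold (often 0.2). A self-contained sketch, with equal-width binning over the baseline's range as a simplifying assumption:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.
    Bins are equal-width over the expected sample's range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def dist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty buckets so the log term stays defined.
        return [(c or 0.5) / len(xs) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [1, 2, 2, 3, 3, 3, 4, 4]
shifted = [3, 3, 4, 4, 4, 5, 5, 5]  # distribution moved right
drift_score = psi(baseline, shifted)  # well above the 0.2 alarm level
```

In production this would be computed per feature and exported as a gauge to Grafana/Prometheus/CloudWatch, with alerts on the threshold.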


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks (RRPs), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Agentic AI Platform
Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD
+21 more

Review Criteria

  • Strong DevOps / Cloud Engineer profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Background in a top-tier product-based company (B2B enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Mumbai, Gurugram, Pune, Hyderabad, Chennai
3 - 6 yrs
₹5L - ₹20L / yr
IBM Sterling Integrator Developer
IBM Sterling B2B Integrator
Shell Scripting
Python
SQL
+1 more

Job Title: IBM Sterling Integrator Developer

Experience: 3 to 5 Years

Locations: Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune

Employment Type: Full-Time


Job Description:

We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.

The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.

Key Responsibilities:

  • Develop, configure, and maintain IBM Sterling Integrator solutions.
  • Design and implement integration solutions using IBM Sterling.
  • Collaborate with cross-functional teams to gather requirements and provide solutions.
  • Work with custom languages and scripting to enhance and automate integration processes.
  • Ensure optimal performance and security of integration systems.

Must-Have Skills:

  • Hands-on experience with IBM Sterling Integrator and associated integration tools.
  • Proficiency in at least one custom scripting language.
  • Strong command over Shell scripting, Python, and SQL (mandatory).
  • Good understanding of EDI standards and protocols is a plus.

Interview Process:

  • 2 Rounds of Technical Interviews.

Additional Information:

  • Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.
Deqode
Mokshada Solanki
Posted by Mokshada Solanki
Bengaluru (Bangalore), Mumbai, Pune, Gurugram
4 - 5 yrs
₹4L - ₹20L / yr
SQL
Amazon Web Services (AWS)
Migration
PySpark
ETL

Job Summary:

Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.


Key Responsibilities:

  • Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
  • Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
  • Work on data migration tasks in AWS environments.
  • Monitor and improve database performance; automate key performance indicators and reports.
  • Collaborate with cross-functional teams to support data integration and delivery requirements.
  • Write shell scripts for automation and manage ETL jobs efficiently.
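On tables of 100+ million rows, keyset pagination (seeking past the last primary key) keeps batch reads index-driven, unlike OFFSET, which rescans from the start each time. A minimal sketch of the pattern, using sqlite3 in place of MySQL; the table and column names are hypothetical.

```python
import sqlite3

def stream_batches(conn, batch_size=2):
    """Iterate a large table in fixed-size batches keyed on the primary
    key, so memory stays flat regardless of table size."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, amount FROM sales WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            return
        yield rows
        last_id = rows[-1][0]  # seek past the last key seen

# Demo on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(1, 6)])
batches = list(stream_batches(conn))  # 5 rows in batches of 2, 2, 1
```

The same seek-based pattern underpins chunked ETL jobs in PySpark/Glue when a monotonically increasing key or timestamp is available.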


Required Skills:

  • Strong experience with MySQL, complex SQL queries, and stored procedures.
  • Hands-on experience with AWS Glue, PySpark, and ETL processes.
  • Good understanding of AWS ecosystem and migration strategies.
  • Proficiency in shell scripting.
  • Strong communication and collaboration skills.


Nice to Have:

  • Working knowledge of Python.
  • Experience with AWS RDS.



Client Of PFC
Agency job
via People First Consultants by Naveed Mohd
Chennai
3 - 7 yrs
₹3L - ₹10L / yr
Docker
Kubernetes
DevOps
Windows Azure
Ansible
+5 more
Requirements:

  • Strong knowledge of Windows and Linux
  • Experience working with version control systems like Git
  • Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
  • Basic understanding of SQL commands
  • Experience working on Azure Cloud DevOps

LogiNext
Rakhi Daga
Posted by Rakhi Daga
Mumbai
2 - 4 yrs
₹6L - ₹11L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more

LogiNext is looking for a technically savvy and passionate DevOps Engineer to cater to the development and operations efforts in product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.

You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience to automate and streamline development operations and processes. You are a master in troubleshooting and resolving issues in non-production and production environments.

Responsibilities:

  • Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
  • Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
  • Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
  • Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
  • Define and build processes to identify performance bottlenecks and scaling pitfalls
  • Manage robust monitoring and alerting infrastructure
  • Explore new tools to improve development operations


Requirements:

  • Bachelor’s degree in Computer Science, Information Technology or a related field
  • 2 to 4 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
  • Strong background in Linux/Unix administration and Python/Shell scripting
  • Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
  • Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
  • Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
  • Experience in enterprise application development, maintenance and operations
  • Knowledge of best practices and IT operations in an always-up, always-available service
  • Excellent written and oral communication skills, judgment and decision-making skills

Numerator
Spurthi Mangalwedhe
Posted by Spurthi Mangalwedhe
Remote only
12 - 18 yrs
₹15L - ₹30L / yr
DevOps
Kubernetes
Docker
Terraform
Ansible
+3 more

Numerator is looking for an experienced, talented and quick-thinking DevOps Manager to join our team and work with the Global DevOps groups to keep infrastructure up to date and continuously advancing.  This is a unique opportunity where you will get the chance to work on the infrastructure of both established and greenfield products. Our technology harnesses consumer-related data in many ways including gamified mobile apps, sophisticated web crawling and enhanced Deep Learning algorithms to deliver an unmatched view of the consumer shopping experience.

As a member of the Numerator DevOps Engineering team, you will make an immediate impact as you help build out and expand our technology platforms from on-premise to the cloud across a wide range of software ecosystems. Many of your daily tasks and engagements with application teams will help shape how new projects are delivered at scale to meet our clients’ demands.

This role requires a balance between hands-on infrastructure-as-code deployments with application teams as well as working with Global DevOps Team to roll out new initiatives.

What you will get to do

  • Participate in troubleshooting and deployment of DevOps tools and solutions to application teams.

  • Adopt best practices and deploy software solutions from the Global DevOps team.

  • Look for innovative ways to improve observability and monitoring of large-scale systems across a variety of technologies throughout the Numerator organization.

  • Lead by example, coach, and solidify DevOps best practices within the Numerator product teams.

 

Requirements

 

  • 5+ years of experience in cloud-based systems in a DevOps position.

  • A passion for well-architected, clean software and a good understanding of software development practices, specifically around reliability, availability and performance engineering.

  • Availability to participate in after-hours on-call support with your fellow engineers where necessary.

  • A pragmatic approach to identifying, documenting and articulating issues or problems in teams, software systems and infrastructure.

  • A good problem solving mindset combined with experience of troubleshooting large-scale cloud based systems.

  • Working knowledge of networking, operating systems and packaging/build systems, e.g. AWS Linux, Ubuntu, PIP and NPM, Terraform, Ansible, etc.

  • Working knowledge of application deployment in Serverless and Kubernetes based environments in AWS, Azure and Google Cloud Platform (GCP).

  • Competence with scripting and *nix command-line tools such as jq, sed, awk and make, plus NoSQL/SQL query languages.

  • Competent in Terraform and Ansible on cloud systems, including Kubernetes.

  • Ability to manage multiple projects and systems across different teams and raise technical issues proactively within the organization.

  • Experience in managing highly redundant data stores, file systems and services both in the cloud and on-premise, including data transfer, redundancy and cost management.

  • BSc, MSc or PhD in Computer Science or a related field, or equivalent work experience.

Nice to have

  • Previous experience working in a geographically distributed team.

  • Experience in an Agile development environment.

  • Experience with CI/CD and monitoring software such as Prometheus, Sumo Logic, Datadog, Coralogix, Snowflake/Panther, Nagios, Jenkins, Splunk, etc.

  • Startup or CPG industry experience.

Synapsica Technologies Pvt Ltd
Human Resources
Posted by Human Resources
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹40L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+5 more

Introduction

Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body; they shouldn't have to rely on a cryptic two-liner given to them as a diagnosis.

Towards this aim, we are building an artificial intelligence enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls

 

Your Roles and Responsibilities

The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.

Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.

 

 

Primary Responsibilities

  • Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
  • Optimizations and execution of the CI/CD pipelines of multiple products and timely promotion of the releases to production environments
  • Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
  • Strategize, implement and verify secure coding techniques, integrate code security tools for Continuous Integration
  • Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
  • Technical documentation through all stages of development
  • Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation

 

Requirements

  • Minimum of 6 years of experience with DevOps tools.
  • Working experience with Linux, container orchestration and management technologies (Docker, Kubernetes, EKS, ECS …).
  • Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
  • Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
  • Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
  • Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
  • DevOps mindset and experience with Agile/Scrum methodology
  • Basic knowledge of storage and databases (SQL and NoSQL)
  • Good understanding of networking technologies, HAProxy, firewalling and security.
  • Experience in Security vulnerability scans and remediation
  • Experience in API security and credentials management
  • Worked on Microservice configurations across dev/test/prod environments
  • Ability to quickly adapt to new languages and technologies
  • A strong team player attitude with excellent communication skills.
  • Very high sense of ownership.
  • Deep interest and passion for technology
  • Ability to plan projects, execute them and meet the deadline
  • Excellent verbal and written English communication.
Censiusai
Censius Team
Posted by Censius Team
Remote only
3 - 5 yrs
₹10L - ₹20L / yr
DevOps
Kubernetes
Docker
Django
Flask
+3 more

About the job

Our goal

We are reinventing the future of MLOps. The Censius Observability platform enables businesses to gain greater visibility into how their AI makes decisions and to understand it better. We enable explanations of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: build the best ML monitoring tool)

 

The culture

We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time, we recognize that at an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.

 

The role:

Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothening the DevOps process.

On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).

By joining our team, you will get exposure to working across a swath of modern technologies while building an enterprise-grade ML platform in a highly promising area.

 

Responsibilities

  • Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
  • Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
  • Author high-quality, high-performance, unit-tested code running in a distributed environment using containers.
  • Continually evaluate and improve DevOps processes for a cloud-native codebase.
  • Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
  • Leverage your industry experience to champion engineering best practices within the organization.

 

Qualifications

Work Experience

  • 3+ years of industry experience (2+ years in a senior engineering role) preferably with some exposure in leading remote development teams in the past.
  • Proven track record building large-scale, high-throughput, low-latency production systems with at least 3+ years working with customers, architecting solutions, and delivering end-to-end products.
  • Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for over 3+ years.
  • 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
  • Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
  • (Bonus: worked with big data in data lakes/warehouses).
  • (Bonus: built an end-to-end ML pipeline)

Skills

  • Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
  • Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
  • Strong independent contributor as well as a team player.
  • Working knowledge of ML and familiarity with concepts of MLOps

Benefits

  • Competitive Salary
  • Work Remotely
  • Health insurance
  • Unlimited Time Off
  • Support for continual learning (free books and online courses)
  • Reimbursement for streaming services (think Netflix)
  • Reimbursement for gym or physical activity of your choice
  • Flex hours
  • Leveling Up Opportunities

 

You will excel in this role if

  • You have a product mindset. You understand, care about, and can relate to our customers.
  • You take ownership, collaborate, and follow through to the very end.
  • You love solving difficult problems, stand your ground, and get what you want from engineers.
  • Resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
Yojito Software Private Limited
Tushar Khairnar
Posted by Tushar Khairnar
Pune
1 - 4 yrs
₹4L - ₹8L / yr
DevOps
skill iconDocker
skill iconKubernetes
skill iconPython
SQL
+4 more

We are looking for people with programming skills in Python, SQL, and cloud computing. Candidates should have experience with at least one of the major cloud computing platforms (AWS/Azure/GCP), professional experience handling applications and databases in the cloud using VMs and Docker images, and the ability to design and develop applications for the cloud.

 

You will be responsible for

  • Leading the DevOps strategy and development of SAAS Product Deployments
  • Leading and mentoring other computer programmers.
  • Evaluating student work and providing guidance in the online courses in programming and cloud computing.

 

Desired experience/skills

Qualifications: Graduate degree in Computer Science or related field, or equivalent experience.

 

Skills:

  • Strong programming skills in Python and SQL
  • Cloud computing

 

Experience:

2+ years of programming experience including Python, SQL, and Cloud Computing. Familiarity with command line working environment.

 

Note: A strong programming background in any language and cloud computing platform is required. We are flexible about the degree of familiarity needed for the specific environments (Python, SQL). If you have extensive experience in one of the cloud computing platforms and less in others, you should still consider applying.

 

Soft Skills:

  • Good interpersonal, written, and verbal communication skills; including the ability to explain the concepts to others.
  • A strong understanding of algorithms and data structures, and their performance characteristics.
  • Awareness of and sensitivity to the educational goals of a multicultural population would also be desirable.
  • Detail-oriented and well organized.