About the role:
We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have a minimum of 4 years of experience, with extensive experience in building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Additionally, expertise in backend technologies and development frameworks is required to collaborate effectively with the development team in delivering scalable and efficient solutions.
Responsibilities
- Collaborate with development and operations teams to implement continuous integration and deployment processes.
- Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
- Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD).
- Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
- Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
- Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
- Familiarity with Azure CLI, Azure REST APIs, Azure Resource Manager templates, Azure billing/cost management, and the Azure Management Console
- Must have experience with at least one programming language (Java, .NET, Python)
- Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
- Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
- Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
- Contribute to backend development projects, ensuring robust and scalable solutions.
- Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
- Design and implement database schemas.
- Identify and implement opportunities for performance optimization and scalability of backend systems.
- Participate in code reviews, architectural discussions, and sprint planning sessions.
- Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
- Mentor junior team members and provide guidance and training on best practices in DevOps.
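As a purely illustrative sketch of the Azure Pipelines CI/CD work described in the responsibilities above (the build command, environment name, and structure are placeholder assumptions, not this role's actual pipeline), a minimal multi-stage definition might look like:

```yaml
# azure-pipelines.yml -- illustrative sketch only; all names are placeholders
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: ./gradlew build test
            displayName: 'Build and run tests'

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToStaging
        environment: 'staging'   # placeholder environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy build artifacts here"
                  displayName: 'Deploy'
```

Gating the Deploy stage on Build via `dependsOn`, and using a `deployment` job tied to an environment, is what makes approvals and release tracking possible in Azure Boards/Pipelines.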
Required Qualifications
- BS/MS in Computer Science, Engineering, or a related field
- 4+ years of experience as an Azure DevOps Engineer (or similar role), with experience in backend development.
- Strong understanding of CI/CD principles and practices.
- Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
- Experience with infrastructure automation tools like Terraform or Ansible.
- Proficient in scripting languages like PowerShell or Python.
- Experience with Linux and Windows server administration.
- Strong understanding of backend development principles and technologies.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Problem-solving and analytical skills.
- Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
- Excellent problem-solving, critical thinking, and communication skills.
- Experience working in a product-based company.
What we offer:
- Competitive salary and benefits package
- Opportunity for growth and advancement within the company
- Collaborative, dynamic, and fun work environment
- Possibility to work with cutting-edge technologies and innovative projects

About FindingPi Inc
Our Culture
FindingPi invests in its people because they are the heart and soul of the company, and we strive to ensure that everyone feels appreciated. We are ambitious not just in terms of our business, but also in terms of promoting passionate and creative teamwork. We are a team of individuals who are proud of who we are. We invest in our people, their development, our services, and our working environment. With our clients and our in-house team, we are open and honest, each team member has a voice, and we make sure it is heard.
Life @FindingPi
We champion diversity, inclusion, and well-being to create a workplace where you value yourself and feel proud of who you are. We believe in a world where you have the freedom to explore and express yourself without judgment, no matter who you are or where you’re from. Our team is made up of academics, innovators, start-up accelerators, and care experts, all connected by a vision to build a better future for care through the combination of best-in-class carers, empowered by technology.
FindingPi is an Equal Opportunity Employer. We do not discriminate on the basis of religious beliefs, caste, creed, colour, or gender. Every person is respected and has the right to voice their opinion.
Similar jobs
Role: Principal DevOps Engineer
About the Client
The client is a product-based company building a platform that uses AI and ML technology for transportation and logistics. It also has a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
- Work from anywhere
- Flexible work timings
- Competitive compensation, including stock options
- A chance to work in the exciting generative AI space
- Quarterly team offsite events
Kutumb is the first and largest communities platform for Bharat. We are growing at an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.
More on this here: https://kutumbapp.com/why-join-us.html
We’re excited if you have:
- Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
- Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
- Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
- Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
- Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment
- Using CI/CD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & Keptn)
- Hands-on experience working with AWS
- Bonus points for knowledge of ETL pipelines and Big data architecture
- Great problem-solving skills & takes pride in your work
- Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
- Abstracting all of the above into as simple of an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
What you’ll be doing:
- Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
- Demonstrate great communication skills in working with technical and non-technical audiences
- Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem
Tools we use:
Kops, Argo, Prometheus/ Loki/ Grafana, Kubernetes, AWS, MySQL/ PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK
What we offer:
- High pace of learning
- Opportunity to build the product from scratch
- High autonomy and ownership
- A great and ambitious team to work with
- Opportunity to work on something that really matters
- Top of the class market salary and meaningful ESOP ownership
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and ‘fixes’
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
- BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
- 4+ years of overall development experience.
- Strong understanding of cloud deployment and setup
- Hands-on experience with tools like Jenkins, Gradle, etc.
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate deployment
- Design procedures for system troubleshooting and maintenance
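As a hedged sketch of the deployment-automation scripting described above (the service name, registry URL, and rollout mechanism are hypothetical placeholders, not this team's actual stack), a small Python helper could assemble a deployment command for review or execution:

```python
# Illustrative only: builds (but does not run) a kubectl rollout command
# for a hypothetical service. Registry, namespace, and names are placeholders.
from typing import List


def build_deploy_command(service: str, image_tag: str,
                         namespace: str = "default") -> List[str]:
    """Return the kubectl command that updates a deployment's container image."""
    if not image_tag:
        raise ValueError("image_tag must be non-empty")
    return [
        "kubectl", "set", "image",
        f"deployment/{service}",
        f"{service}=registry.example.com/{service}:{image_tag}",
        "-n", namespace,
    ]


cmd = build_deploy_command("web-api", "v1.2.3", namespace="staging")
print(" ".join(cmd))
```

Keeping the command construction in a pure function like this makes the automation easy to unit-test before it is wired into a pipeline or wrapped with `subprocess.run`.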
Skills and Qualifications
- Proficient with git and git workflows
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
- Demonstrated experience with AWS
- Knowledge of servers, networks, storage, client-server systems, and firewalls
- Strong expertise in Windows and/or Linux operating systems, including system architecture and design, as well as experience supporting and troubleshooting stability and performance issues
- Thorough understanding of and experience with virtualization technologies (e.g., VMWare/Hyper-V)
- Knowledge of core network services such as DHCP, DNS, IP routing, VLANs, layer 2/3 routing, and load balancing is required
- Experience in reading, writing, or modifying PowerShell, Bash scripts, and Python code
- Experience using Git
- Working know-how of software-defined lifecycles, product packaging, and deployments
- PostgreSQL or Oracle database administration (backup, restore, tuning, monitoring, management)
- At least two of the AWS Associate certifications: Solutions Architect, DevOps Engineer, or SysOps Administrator
- At least one of the AWS Professional certifications: Solutions Architect or DevOps Engineer
- AWS: S3, Redshift, DynamoDB, EC2, VPC, Lambda, CloudWatch etc.
- Big data: Databricks, Cloudera, Glue, and Athena
- DevOps: Jenkins, Bitbucket
- Automation: Terraform, CloudFormation, Python, shell scripting
- Experience in automating AWS infrastructure with Terraform
- Experience in database technologies is a plus.
- Knowledge in all aspects of DevOps (source control, continuous integration, deployments, etc.)
- Proficiency in security implementation best practices on IAM policies, KMS encryption, Secrets Management, Network Security Groups etc.
- Experience working in the SCRUM Environment
- Cloud and virtualization-based technologies (Amazon Web Services (AWS), VMWare).
- Java application server administration (WebLogic, WildFly, JBoss, Tomcat).
- Docker and Kubernetes (EKS)
- Linux/UNIX Administration (Amazon Linux and RedHat).
- Developing and supporting cloud infrastructure designs and implementations and guiding application development teams.
- Configuration management tools (Chef, Puppet, or Ansible).
- Log aggregations tools such as Elastic and/or Splunk.
- Automate infrastructure and application deployment-related tasks using Terraform.
- Automate repetitive tasks required to maintain a secure and up-to-date operational environment.
Responsibilities
- Build and support always-available private/public cloud-based software-as-a-service (SaaS) applications.
- Build AWS or other public cloud infrastructure using Terraform.
- Deploy and manage Kubernetes (EKS)-based Docker applications in AWS.
- Create custom OS images using Packer.
- Create and revise infrastructure and architectural designs and implementation plans and guide the implementation with operations.
- Liaison between application development, infrastructure support, and tools (IT Services) teams.
- Development and documentation of Chef recipes and/or Ansible scripts. Support throughout the entire deployment lifecycle (development, quality assurance, and production).
- Help developers leverage infrastructure, application, and cloud platform features and functionality; participate in code and design reviews; and support developers by building CI/CD pipelines using Bamboo, Jenkins, or Spinnaker.
- Create knowledge-sharing presentations and documentation to help developers and operations teams understand and leverage the system's capabilities.
- Learn on the job and explore new technologies with little supervision.
- Leverage scripting (BASH, Perl, Ruby, Python) to build required automation and tools on an ad-hoc basis.
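As a rough illustration of the Terraform-based infrastructure work listed above (provider version, region, and resource names here are placeholder assumptions, not a prescribed setup), a minimal configuration might look like:

```hcl
# Illustrative sketch only -- region and names are placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

# Example resource: an S3 bucket for build artifacts.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-build-artifacts" # placeholder bucket name
  tags = {
    ManagedBy = "terraform"
  }
}
```

Running `terraform plan` against such a configuration shows the proposed changes before `terraform apply` makes them, which is the core of the "infrastructure as code" review workflow.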
Who we have in mind:
- Solid experience in building a solution on AWS or other public cloud services using Terraform.
- Excellent problem-solving skills with a desire to take on responsibility.
- Extensive knowledge in containerized application and deployment in Kubernetes
- Extensive knowledge of the Linux operating system, RHEL preferred.
- Proficiency with shell scripting.
- Experience with Java application servers.
- Experience with Git and Subversion.
- Excellent written and verbal communication skills with the ability to communicate technical issues to non-technical and technical audiences.
- Experience working in a large-scale operational environment.
- Internet and operating system security fundamentals.
- Extensive knowledge of massively scalable systems. Linux operating system/application development desirable.
- Programming in scripting languages such as Python. Other object-oriented languages (C++, Java) are a plus.
- Experience with Configuration Management Automation tools (chef or puppet).
- Experience with virtualization, preferably on multiple hypervisors.
- BS/MS in Computer Science or equivalent experience.
- Excellent written and verbal skills.
Education or Equivalent Experience:
- Bachelor's degree or equivalent education in related fields
- Certificates of training in associated fields/equipment
- As a DevOps Engineer, you need to have strong experience in CI/CD pipelines.
- Setup development, testing, automation tools, and IT infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Selecting and deploying appropriate CI/CD tools
- Deploy and maintain CI/CD pipelines across multiple environments (mobile, web APIs & AI/ML)
Required skills & experience:
- 3+ years of experience as DevOps Engineer and strong working knowledge in CI/CD pipelines
- Experience administering and deploying development CI/CD using Git, BitBucket, CodeCommit, Jira, Jenkins, Maven, Gradle, etc
- Strong knowledge in Linux-based infrastructures and AWS/Azure/GCP environment
- Working knowledge of AWS (IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, etc.)
- Experience with Docker containerization and clustering (Kubernetes/ECS)
- Experience with Android source (AOSP) clone, build, and automation ecosystems
- Knowledge of scripting languages such as Python, Shell, Groovy, Bash, etc
- Familiar with Android ROM development and build process
- Knowledge of Agile Software Development methodologies

Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 4+ years of experience in continuous integration/deployment and software tools development with Python, shell scripts, etc.
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience in cloud orchestration tools such as AWS Cloudformation/Terraform etc
- Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts and methodologies, and the ability to apply them: SSH, public key encryption, access credentials, certificates, etc.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects on the design and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
What we are looking for
- Work closely with product & engineering groups to identify and document infrastructure requirements.
- Design infrastructure solutions balancing requirements, operational constraints, and architecture guidelines.
- Implement infrastructure including network connectivity, virtual machines, and monitoring.
- Implement and follow security guidelines, both policy and technical, to protect our customers.
- Resolve incidents as escalated from monitoring solutions and lower tiers.
- Identify root cause for issues and develop long-term solutions to fix recurring issues.
- Automate recurring tasks to increase velocity and quality.
- Partner with the engineering team to build software tolerance for infrastructure failure or issues.
- Research emerging technologies, trends, and methodologies, and enhance existing systems and processes.
Qualifications
Master’s/Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, and two years of experience in software/systems or related.
5+ years overall experience.
Work experience must have included:
- Proven track record in deploying, configuring, and maintaining Ubuntu server systems on premise and in the cloud.
- Minimum of 4 years’ experience designing, implementing, and troubleshooting TCP/IP networks, VPNs, load balancers, and firewalls.
- Minimum 3 years of experience working in public clouds like AWS and Azure.
- Hands-on experience with any of the configuration management tools like Ansible, Chef, and Puppet.
- Strong in performing production operation activities.
- Experience with container and container orchestrator tools like Kubernetes and Docker Swarm is a plus.
- Good at source code management tools like Bitbucket and Git.
- Configuring and utilizing monitoring and alerting tools.
- Scripting to automate infrastructure and operational processes.
- Hands-on work to secure networks and systems.
- Sound problem resolution, judgment, negotiating, and decision-making skills
- Ability to manage and deliver multiple project phases at the same time
- Strong analytical and organizational skills
- Excellent written and verbal communication skills
Interview focus areas
Networks, systems, monitoring
AWS (EC2, S3, VPC)
Problem solving, scripting, network design, systems administration, and troubleshooting scenarios
Culture fit, agility, bias for action, ownership, communication

