Required Skills
• Automation is part of your daily work, so thorough familiarity with Unix Bourne shell scripting and Python is critical.
• Integration and maintenance of automated tools
• Strong analytical and problem-solving skills
• Working experience with source control tools such as Git/GitHub/GitLab/TFS
• Experience with modern virtualization technologies (Docker, KVM, AWS, OpenStack, or other orchestration platforms)
• Automation of deployment, customization, upgrades, and monitoring through modern DevOps tools (Ansible, Kubernetes, OpenShift, etc.)
• Advanced Linux administration experience
• Experience using Jenkins or similar tools
• Deep understanding of container orchestration (preferably Kubernetes)
• Strong knowledge of object storage (preferably Ceph on Rook)
• Experience in installing, managing & tuning microservices environments using Kubernetes & Docker both on-premise and on the cloud.
• Experience in deploying and managing spring boot applications.
• Experience in deploying and managing Python applications built with Django, FastAPI, or Flask.
• Experience in deploying machine learning/data pipelines using Airflow, Kubeflow, or MLflow.
• Experience with web servers and reverse proxies such as Nginx, Apache HTTP Server, and HAProxy
• Experience with monitoring tools such as Prometheus and Grafana.
• Experience in provisioning and maintaining SQL/NoSQL databases.
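Much of the day-to-day automation described above reduces to small, defensive shell scripts. As an illustrative sketch (the endpoint URL and retry count are hypothetical, not from the posting), a POSIX-shell health check of the kind this role involves might look like:

```shell
# Health-check sketch: probe a (hypothetical) service endpoint with a few
# retries, succeeding as soon as the endpoint answers. The URL and retry
# count are illustrative placeholders.
check_health() {
    url=$1
    retries=${2:-3}
    i=1
    while [ "$i" -le "$retries" ]; do
        # -f: fail on HTTP errors, -sS: quiet but show real errors
        if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
            echo "OK: $url"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "FAIL: $url unreachable after $retries attempts" >&2
    return 1
}

# Example (hypothetical): check_health "http://localhost:8080/healthz" || page-oncall
```

In practice such a script would feed an alerting hook rather than just echoing, but it shows the defensive retry/exit-status style expected of daily shell automation.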
Desired Skills
• Configuration software: Ansible
• Excellent communication and collaboration skills
• Good experience with networking technologies such as load balancers, ACLs, firewalls, VIPs, and DNS
• Programmatic experience with AWS, DO, or GCP storage & machine images
• Experience on various Linux distributions
• Knowledge of Azure DevOps Server
• Docker management and troubleshooting
• Familiarity with micro-services and RESTful systems
• AWS / GCP / Azure certification
• Interact with Engineering to support, maintain, and design backend infrastructure for product support
• Create fully automated global cloud infrastructure that spans multiple regions.
• Eagerness to learn new technologies and work as a team player
Similar jobs
Senior DevOps Engineer with 5+ years of experience to enhance cloud infrastructure and optimize application performance.
Qualifications:
- Bachelor’s degree in Computer Science or related field.
- 5+ years of DevOps experience with strong scripting skills (shell, Python, Ruby).
- Familiarity with open-source technologies and application development methodologies.
- Experience in optimizing both stand-alone and distributed systems.
Key Responsibilities:
- Design and maintain DevOps practices for seamless application deployment.
- Utilize AWS tools (EBS, S3, EC2) and automation technologies (Ansible, Terraform).
- Manage Docker containers and Kubernetes environments.
- Implement CI/CD pipelines with tools like Jenkins and GitLab.
- Use monitoring tools (Datadog, Prometheus) for system reliability.
- Collaborate effectively across teams and articulate technical choices.
2. Kubernetes Engineer
DevOps Systems Engineer with experience in Docker containers, Docker Swarm, Docker Compose, Ansible, Jenkins, and related tools. As part of this role, he or she must apply container best practices in design, development, and implementation.
At least 4 years of experience in DevOps with knowledge of:
- Docker
- Docker Cloud / Containerization
- DevOps Best Practices
- Distributed Applications
- Deployment Architecture
- AWS experience, at a minimum
- Exposure to Kubernetes / Serverless Architecture
Skills:
- 3-7+ years of experience in DevOps Engineering
- Strong experience with Docker containers: implementing containers and container clustering
- Experience with Docker Swarm, Docker Compose, Docker Engine
- Experience provisioning and managing VMs (virtual machines)
- Experience/strong knowledge of network topologies and network research
- Jenkins, Bitbucket, Jira
- Ansible or other configuration management/automation tools
- Scripting and programming using languages such as Bash, Perl, Python, AWK, sed, PHP, and shell
- Linux systems administration: Red Hat
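The scripting bullet above (Bash, AWK, sed) is typically exercised in quick log-triage one-liners. A small sketch, assuming a hypothetical access-log format in which the HTTP status code is the ninth whitespace-separated field:

```shell
# Tally HTTP status codes in an access log and list them by frequency.
# Assumes a (hypothetical) log format where the status code is field 9,
# as in the common Apache/Nginx combined log layout.
count_statuses() {
    awk '{ counts[$9]++ } END { for (s in counts) print counts[s], s }' "$1" |
        sort -rn    # most frequent status first
}
```

Run as `count_statuses access.log`; the awk associative array does the counting and `sort -rn` orders the summary, which is the kind of composition this role expects from AWK and core utilities.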
Additional Preference:
Security, SSL configuration, Best Practices
Must Haves: Openshift, Kubernetes
Location: Currently in India (also willing to relocate to UAE)
Preferred an immediate joiner with minimum 2 weeks to 1 month of Notice Period.
Add-on skills: Terraform, GitOps, Jenkins, ELK
- Design cloud infrastructure that is secure, scalable, and highly available on AWS, Azure and GCP
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS, Azure, GCP cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS, Azure Infrastructure and systems
- Perform infrastructure cost analysis and optimization
This person MUST have:
- B.E Computer Science or equivalent
- 2+ Years of hands-on experience troubleshooting/setting up of the Linux environment, who can write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ Years of hands-on experience setting up CICD from SCRATCH in Jenkins & Gitlab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications - AWS, GCP, CKA, etc will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Minimum 3 years of experience as an SRE/automation engineer building, running, and maintaining production sites. We are not looking for candidates with only L1/L2 experience.
Location:
- Remotely, anywhere in India
Timings:
- The person is expected to deliver with both high speed and high quality, working 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
Position:
- Full time/Direct
- We offer great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay because you love the company. Our notice period is only 15 days.
You will work on:
You will be working on some of our clients' massive-scale infrastructure and DevOps requirements, designing for microservices and large-scale data analytics. You will work on enterprise-scale problems as part of an agile team that delivers like a startup, and you will have the opportunity to help build and manage a large private cloud.
What you will do (Responsibilities):
- Work on cloud marketplace enablements for some of our clients products
- Write Kubernetes Operators to automate custom PaaS solutions
- Participate in cloud projects to implement new technology solutions and proofs of concept that improve our cloud technology offerings.
- Work with developers to deploy services to private or public cloud or on-premise environments, and debug and resolve issues.
- On-call responsibilities to respond to emergencies and scheduled maintenance.
- Contribute to and maintain documentation for systems, processes, procedures, and infrastructure configuration
What you bring (Skills):
- Experience administering and debugging Linux-based systems, with programming skills in shell scripting, Golang, and Python, among others
- Expertise with Git repositories, specifically on GitHub, GitLab, Bitbucket, or Gerrit
- Comfortable with DevOps for big data platforms such as Teradata, Netezza, Hadoop-based ecosystems, BigQuery, and Redshift
- Comfortable interfacing with SQL and NoSQL databases such as MySQL, Postgres, MongoDB, Elasticsearch, and Redis
Great if you know (Skills):
- Understanding of various build and CI/CD systems: Maven, Gradle, Jenkins, GitLab CI, Spinnaker, or cloud-based build systems
- Exposure to deploying and automating on any public cloud – GCP, Azure or AWS
- Private cloud experience – VMWare or OpenStack
- Big DataOps experience – managing infrastructure and processes for Apache Airflow, Beam, Hadoop clusters
- Containerized applications: Docker-based image builds and maintenance.
- Kubernetes applications: deploying and developing operators, Helm charts, and manifests, among other artifacts.
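The Kubernetes items above (operators, Helm charts, manifests) all come down to producing declarative YAML. A minimal sketch that templates a Deployment manifest from shell variables (the app name, image, and replica count are hypothetical placeholders):

```shell
# Emit a minimal Kubernetes Deployment manifest for a given app and image.
# In real use the output would be piped to `kubectl apply -f -` or replaced
# by a Helm chart; here it simply writes YAML to stdout.
render_deployment() {
    app=$1
    image=$2
    replicas=${3:-2}
    cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${app}
spec:
  replicas: ${replicas}
  selector:
    matchLabels:
      app: ${app}
  template:
    metadata:
      labels:
        app: ${app}
    spec:
      containers:
        - name: ${app}
          image: ${image}
EOF
}

# Example (hypothetical): render_deployment demo-api registry.example/demo:1.0 3 | kubectl apply -f -
```

A heredoc like this is quickly outgrown by Helm or Kustomize, but it illustrates the manifest structure this kind of role works with daily.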
Advantage Cognologix:
- Higher degree of autonomy, startup culture & small teams
- Opportunities to become expert in emerging technologies
- Remote working options for the right maturity level
- Competitive salary & family benefits
- Performance based career advancement
About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration, and delivery.
Benefits Working With Us:
- Health & Wellbeing
- Learn & Grow
- Evangelize
- Celebrate Achievements
- Financial Wellbeing
- Medical and Accidental cover.
- Flexible Working Hours.
- Sports Club & much more.
As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform, using modern Infrastructure engineering practices.
You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi cloud platform. You'll be joining a team that follows best practices in infrastructure as code.
Your Key Responsibilities
- Build out infrastructure components such as an API gateway, service mesh, service discovery, and a container orchestration platform like Kubernetes.
- Developing reusable Infrastructure code and testing frameworks
- Build meaningful abstractions to hide the complexities of provisioning modern infrastructure components
- Design a scalable Centralized Logging and Metrics platform
- Drive solutions to reduce Mean Time To Recovery (MTTR) and enable high availability.
What to Bring
- Good to have: experience managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Java, Python and Go
- Experience in handling logs and metrics at a high scale.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.
• Work with the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implement tools and processes for deployment, monitoring, alerting, automation, and scalability, ensuring maximum availability of server infrastructure
• Manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra
• Troubleshoot deployment servers, handle software installation, and manage licensing
• Plan, coordinate, and implement network security measures to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes.
Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.
As a DevOps Engineer at Radical, you will:
Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail
We’re looking for someone who has:
Familiarity and experience with writing working, well-documented, and well-tested scripts, Dockerfiles, and Puppet/Ansible/Chef/Terraform code.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside software developers, and monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (e.g., containerisation or Elastic Beanstalk instead of hosting an application directly on EC2)
It’s not essential, but great if you have:
An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS
The position is open now, and we are onboarding immediately.
Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.
Radical is based out of Delhi NCR, India, and we look forward to working with you!
We're looking for people who may not know all the answers but are obsessive about finding them, and who take pride in the code they write. We are more interested in the ability to learn fast and think rigorously, and in people who aren't afraid to challenge assumptions and take large bets, then work hard to prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.