
About rplanx Technology Private Limited
We are looking for a DevOps Architect/Engineer with strong technical expertise and several years of experience to join our IT firm. You will be responsible for facilitating the development process and operations across our organization. You should also have excellent leadership skills to mentor and guide the team members.
To be successful in this job role, you should be able to streamline all DevOps practices. You should be able to review technical operations and automate the process using the right tools and techniques.
Your knowledge of the latest industry trends will prove beneficial in designing efficient practices. Moreover, you should be able to demonstrate excellent research skills and outstanding problem-solving abilities. If you possess the required skills, knowledge, and experience, then do write to us. We would be happy to meet you.
Responsibilities
- Simplifying the development process and operations
- Analyzing any setbacks or shortcomings in the operations
- Developing appropriate DevOps channels
- Setting up a continuous build environment to boost software development
- Designing and delivering efficient and comprehensive best practices
- Reviewing and managing technical operations
- Analyzing and streamlining the existing DevOps practices
- Monitoring and guiding the development team
- Using the right tools and techniques to automate technical processes (see the sketch after this list)
- Handling all deployment processes
- Minimizing failures and troubleshooting any technical issues
- Implementing effective DevOps solutions
- Ensuring all systems are secure and scalable
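To give a concrete flavor of the automation item above, here is a minimal Python sketch that builds and publishes a Docker image as one step of a deployment pipeline. It is illustrative only: Docker is assumed to be installed, and the registry and image name are hypothetical placeholders.

    import subprocess

    # Hypothetical registry, image name, and tag; substitute your own.
    IMAGE = "registry.example.com/myapp"
    TAG = "v1.0.0"

    def run(cmd):
        """Echo a shell command, run it, and fail fast on errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def build_and_push():
        ref = f"{IMAGE}:{TAG}"
        run(["docker", "build", "-t", ref, "."])  # build from the local Dockerfile
        run(["docker", "push", ref])              # publish the image to the registry

    if __name__ == "__main__":
        build_and_push()

In practice a CI server such as Jenkins would run a script like this on every merge, with the tag derived from the commit or release.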
Requirements
- Bachelor’s degree in Information Technology, Computer Science or a related discipline
- Previous work experience in the IT industry
- Complete knowledge of DevOps tools like Docker, Git, Puppet, and Jenkins
- Familiarity with the latest industry trends and best practices
- Basic understanding of cloud-based environments
- Know-how of software development, testing, and configuration methodologies
- Knowledge of scripting languages like Python and JavaScript, and of markup languages like HTML
- Excellent analytical and research skills
- Strong leadership skills
- Ability to motivate team members
- Highly innovative individual
- Excellent problem-solving ability
- Ability to organize and manage time effectively
- Ability to work collaboratively and independently
- Good communication skills
- Ability to drive the automation process
- Demonstrated multitasking skills
- Familiarity with deployment process and tools
- Ability to maintain the confidentiality of any sensitive information
- Willingness to work in a competitive environment

Requirements:
• Previously held a DevOps Engineer or System Engineer role
• 4+ years of production Linux system admin experience in high traffic environment
• 1+ years of experience with Amazon AWS and related services (instances, ELB, EBS, S3, etc.) and abstractions on top of AWS.
• Strong understanding of network fundamentals, IP and related services (DNS, VPN, firewalls, etc.), and security concerns.
• Experience in running Docker and Kubernetes clusters in production.
• Love automating mundane tasks and making developers' lives easier (see the sketch after this list)
• Must be able to code in, at a minimum, Python (or Ruby) and Bash.
• Non-trivial production experience with SaltStack and/or Puppet, Composer, Jenkins, Git
• Agile software development best practices - continuous integration, releases, branches, etc.
• Experience with modern monitoring tools; capacity planning.
• Some experience with MySQL, PostgreSQL, ElasticSearch, Node.js, and PHP is a plus.
• Self-motivated, fast learner, detail-oriented, team player with a sense of humor
• Experience in managing CI/CD using Jenkins.
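As a small taste of the AWS-plus-Python scripting asked for above, here is a minimal sketch that uses boto3 to flag unhealthy instances behind a classic ELB. The load balancer name is a hypothetical placeholder, and credentials are assumed to come from the environment (an IAM role or AWS profile).

    import boto3

    # Hypothetical load balancer name; real names come from your AWS account.
    ELB_NAME = "my-production-elb"

    def report_unhealthy_instances():
        """Print any instances the classic ELB reports as not InService."""
        elb = boto3.client("elb")
        health = elb.describe_instance_health(LoadBalancerName=ELB_NAME)
        for state in health["InstanceStates"]:
            if state["State"] != "InService":
                print(state["InstanceId"], state["State"], state.get("Description", ""))

    if __name__ == "__main__":
        report_unhealthy_instances()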
As a SaaS DevOps Engineer, you will be responsible for providing automated tooling and process enhancements for SaaS deployment, application and infrastructure upgrades, and production monitoring.
- Development of automation scripts and pipelines for deployment and monitoring of new production environments (see the sketch after this list).
- Development of automation scripts for upgrades, hotfix deployments, and maintenance.
- Work closely with Scrum teams and product groups to support the quality and growth of the SaaS services.
- Collaborate closely with the SaaS Operations team on day-to-day production activities: handling alerts and incidents.
- Assist the SaaS Operations team with customer-focused projects: migrations and feature enablement.
- Write knowledge articles to document known issues and best practices.
- Conduct regression tests to validate solutions or workarounds.
- Work in a globally distributed team.
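As one concrete shape the deployment-monitoring scripts above can take, here is a minimal Python sketch that polls a health endpoint and raises an alert after repeated failures. The URL, interval, and threshold are hypothetical, and a real version would page an on-call system instead of printing.

    import time
    import urllib.error
    import urllib.request

    HEALTH_URL = "https://saas.example.com/healthz"  # placeholder endpoint
    MAX_FAILURES = 3                                 # consecutive failures before alerting

    def healthy():
        """Return True if the endpoint answers HTTP 200 within 5 seconds."""
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                return resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            return False

    def watch():
        failures = 0
        while True:
            failures = 0 if healthy() else failures + 1
            if failures >= MAX_FAILURES:
                print(f"ALERT: health check failed {failures} times in a row")
            time.sleep(30)

    if __name__ == "__main__":
        watch()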
What achievements should you have so far?
- Bachelor's or master's degree in Computer Science, Information Systems, or equivalent.
- Experience with containerization, deployment, and operations.
- Strong knowledge of CI/CD processes (Git, Jenkins, pipelines).
- Good experience with Linux systems and shell scripting.
- Basic cloud experience, preferably with MS Azure.
- Basic knowledge of containerized solutions (Helm, Kubernetes, Docker).
- Good networking skills and experience.
- Terraform or CloudFormation knowledge will be considered a plus.
- Ability to analyze a task from a system perspective.
- Excellent problem-solving and troubleshooting skills.
- Excellent written and verbal communication skills; mastery of English and the local language.
- Must be organized, thorough, autonomous, committed, flexible, customer-focused, and productive.
Kutumb is the first and largest communities platform for Bharat. We are growing on an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.
More on this here - https://kutumbapp.com/why-join-us.html
We’re excited if you have:
- Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
- Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (Elasticsearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki (see the sketch after this list)
- Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
- Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
- Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
- The drive and self-motivation to understand the intricate details of a complex infrastructure environment
- Using CI/CD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & keptn)
- Hands-on experience working with AWS
- Bonus points for knowledge of ETL pipelines and Big data architecture
- Great problem-solving skills and pride in your work
- Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
- Abstracting all of the above into as simple an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
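To make the observability expectations above concrete, here is a minimal Python sketch that instruments a service with the prometheus_client library. The metric names, port, and simulated work are illustrative; Prometheus would scrape the exposed endpoint, and Grafana dashboards or alerts would be built on the resulting series.

    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Illustrative metric names; a real service defines its own scheme.
    REQUESTS = Counter("app_requests_total", "Total requests handled")
    LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

    @LATENCY.time()
    def handle_request():
        """Stand-in for real work; the decorator records its latency."""
        time.sleep(random.uniform(0.01, 0.1))
        REQUESTS.inc()

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            handle_request()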
What you’ll be doing:
- Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc.)
- Demonstrate great communication skills in working with technical and non-technical audiences
- Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem
Tools we use:
Kops, Argo, Prometheus/Loki/Grafana, Kubernetes, AWS, MySQL/PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK
What we offer:
- High pace of learning
- Opportunity to build the product from scratch
- High autonomy and ownership
- A great and ambitious team to work with
- Opportunity to work on something that really matters
- Top-of-class market salary and meaningful ESOP ownership
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.
We are looking for a DevOps Engineer with expertise in infrastructure as code, configuration management, continuous integration, continuous deployment, and automated monitoring for big data workloads, large enterprise applications, customer applications, and databases.
You will have hands-on technology expertise coupled with a background in professional services and client-facing skills. You are passionate about the best practices of cloud deployment and ensuring that customer expectations are set and met appropriately. If you love solving problems with your skills, then join Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What will you do?
- Automate infrastructure creation with Terraform and AWS CloudFormation (see the sketch after this list).
- Perform application configuration management and application deployment, enabling infrastructure as code.
- Take ownership of the build and release cycle of the customer project.
- Share the responsibility for deploying releases and conducting other operations maintenance.
- Enhance operations infrastructure such as Jenkins clusters, Bitbucket, monitoring tools (Consul), and metrics tools such as Graphite and Grafana.
- Provide operational support and help the rest of the Engineering team migrate our remaining dedicated hardware infrastructure to the cloud.
- Establish and maintain operational best practices.
- Participate in hiring engineers who are a cultural fit for the organization, and help engineers shape their career paths by consulting with them.
- Design the team strategy in collaboration with the founders of the organization.
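As a sketch of the infrastructure automation above, here is a minimal Python wrapper that drives a Terraform workspace non-interactively. The working directory is a hypothetical placeholder, and the terraform CLI (0.14+ for -chdir) is assumed to be on PATH.

    import subprocess

    WORKDIR = "infra/prod"  # hypothetical directory containing *.tf files

    def terraform(*args):
        """Run a terraform subcommand in the workspace, failing fast on errors."""
        cmd = ["terraform", f"-chdir={WORKDIR}", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        terraform("init", "-input=false")
        terraform("plan", "-input=false", "-out=tfplan")
        terraform("apply", "-input=false", "tfplan")

In a pipeline, the plan step would typically run on pull requests and the apply step only after review and merge.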
What are we looking for?
- 4+ years of experience using Terraform for IaC
- 4+ years of configuration management and engineering for large-scale customers, ideally supporting an Agile development process.
- 4+ years of Linux or Windows Administration experience.
- 4+ years of version control systems (git), including branching and merging strategies.
- 2+ years of experience working with AWS infrastructure and platform services.
- 2+ years of experience with cloud automation tools (Ansible, Chef).
- Exposure to working on container services like Kubernetes on AWS, ECS, and EKS
- You are extremely proactive at identifying ways to improve things and to make them more reliable.
You will be preferred if
- Expertise in multiple cloud services provider: Amazon Web Services, Microsoft Azure, Google Cloud Platform
- AWS Solutions Architect Professional or Associate Level Certificate
- AWS DevOps Professional Certificate
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
Requirements:
● Should have 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
As an MLOps Engineer at QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling, and ML tooling so the team can produce high-quality code and release it to production.
Build modern, scalable, and secure CI/CD pipelines to automate the development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, TypeScript, among others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store); see the sketch after this list
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
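For a concrete instance of the MLOps tooling bullet above, here is a minimal experiment-tracking sketch using MLflow's Python API. The experiment name and logged values are illustrative, and a real setup would also configure a shared tracking URI.

    import mlflow

    # Illustrative experiment name; runs are grouped under it in the MLflow UI.
    mlflow.set_experiment("demo-classifier")

    with mlflow.start_run():
        mlflow.log_param("learning_rate", 0.01)  # hyperparameters for this run
        mlflow.log_param("epochs", 10)
        # Placeholder metric; in practice this comes from model evaluation.
        mlflow.log_metric("accuracy", 0.93)

Runs logged this way can be compared in the MLflow UI and promoted through a model registry as part of a governed release process.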
Roles and Responsibilities
- 5 - 8 years of experience in Infrastructure setup on Cloud, Build/Release Engineering, Continuous Integration and Delivery, Configuration/Change Management.
- Good experience with Linux/Unix administration and moderate to significant experience administering relational databases such as PostgreSQL.
- Experience with Docker and related tools (Cassandra, Rancher, Kubernetes, etc.)
- Experience working with config management tools (Ansible, Chef, Puppet, Terraform, etc.) is a plus.
- Experience with cloud technologies like Azure
- Experience with monitoring and alerting (TICK, ELK, Nagios, PagerDuty)
- Experience with distributed systems and related technologies (NSQ, RabbitMQ, SQS, etc.) is a plus
- Experience with scaling data store technologies (PostgreSQL, Scylla, Redis) is a plus
- Experience with SSH Certificate Authorities and Identity Management (Netflix BLESS) is a plus
- Experience with multi-domain SSL certs and provisioning (Let's Encrypt) is a plus
- Experience with chaos engineering or similar methodologies is a plus


