MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX’s own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions through its Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks that harness the power of Machine Learning. The Maverick platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical and functional issues across complex, global applications and platforms to provide timely resolution.
- Bring hands-on experience with Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash, Python, Ruby, or Golang (see the sketch after this list).
- Configure and manage data stores such as PostgreSQL, MySQL, MongoDB, Elasticsearch, Redis, Cassandra, and Hadoop.
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, and Cloud Build.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our service catalogue, then participate in the required development work
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
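For illustration only, here is a minimal sketch of the kind of scripting/automation work described above, written in Python using only the standard library; the endpoint names are hypothetical placeholders and not part of this role description.

```python
"""Minimal health-check sketch for production service components.

Assumptions: the ENDPOINTS below are hypothetical placeholders; a real
script would read them from configuration or a service catalogue.
"""
import json
import logging
import urllib.error
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# Hypothetical health endpoints; replace with real service URLs.
ENDPOINTS = {
    "api": "https://api.example.internal/healthz",
    "worker": "https://worker.example.internal/healthz",
}


def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            healthy = 200 <= resp.status < 300
            logging.info("%s -> HTTP %s", name, resp.status)
            return healthy
    except (urllib.error.URLError, TimeoutError) as exc:
        logging.error("%s unreachable: %s", name, exc)
        return False


if __name__ == "__main__":
    results = {name: check(name, url) for name, url in ENDPOINTS.items()}
    print(json.dumps(results, indent=2))
```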
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience with Linux and Jenkins, and ample experience in configuring and automating monitoring tools.
- Experience with the software development process and with tools and technologies such as SaaS platforms, Python, Java, MongoDB, shell scripting, MySQL, and Git.
- Knowledge of handling distributed data systems such as Elasticsearch, Cassandra, and Hadoop.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance)
- Sum Insured: INR 25,00,000/- per employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax-saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
*******************

Similar jobs
Job Title : Azure DevOps Engineer
Experience Required : 7+ Years
Work Mode : Remote / Hybrid
Location : Remote
Notice Period : Immediate Joiners / Serving Candidates (within 20 days only)
Interview Mode : Face-to-Face or Virtual
Open Positions : 2
Job Description :
We are seeking an experienced Azure DevOps Engineer with 7+ years of relevant experience in DevOps practices, especially around Azure infrastructure, deployment automation, and CI/CD pipeline management. The ideal candidate should have hands-on expertise with Azure DevOps, GitHub, YAML, and Azure services, along with solid communication and coordination capabilities.
Mandatory Skills : Azure DevOps, GitHub Actions, YAML, Bicep, Azure services (App Gateway, WAF, NSG, CosmosDB, Storage Accounts), Unix scripting, and Azure Fundamentals certification.
Key Responsibilities :
- Manage deployments for Dynamics 365 and proxy applications
- Run and maintain ADO pipelines and GitHub Actions
- Ensure proper status updates on ADO Boards and deployed work items
- Coordinate with QA teams to execute smoke testing post-deployment
- Communicate deployment progress effectively across team channels (see the sketch after this list)
- Monitor deployment cycles, approval gates, logs, and alerts
- Ensure smooth integration of infrastructure and DevOps practices
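As a hedged illustration of the "communicate deployment progress" responsibility above: a small Python sketch that posts a status message to an incoming webhook (e.g. a Teams or Slack channel). The webhook URL and pipeline names are hypothetical placeholders, not details from this posting.

```python
"""Post a deployment status update to a team channel via an incoming webhook.

Assumption: DEPLOY_WEBHOOK_URL points to a hypothetical incoming webhook.
"""
import json
import os
import urllib.request

WEBHOOK_URL = os.environ.get("DEPLOY_WEBHOOK_URL", "https://example.invalid/webhook")


def post_status(pipeline: str, stage: str, result: str) -> int:
    """Send a short JSON status message and return the HTTP status code."""
    payload = {"text": f"Deployment update: {pipeline} / {stage} -> {result}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status


if __name__ == "__main__":
    # Hypothetical pipeline and stage names.
    print(post_status("proxy-app-release", "post-deployment smoke test", "passed"))
```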
Mandatory Skills :
- Minimum 7+ years in DevOps, with strong experience in Azure DevOps (ADO).
- Proven expertise in building pipelines using Azure DevOps and GitHub.
- Proficiency in Bicep, YAML scripting, and Azure Infrastructure-as-Code (IaC).
- Hands-on with Azure services like :
- App Gateway, WAF, NSG, CosmosDB, Storage Accounts.
- vNet, Managed Identity, KeyVault, AppConfig, App Insights.
- Basic Azure Fundamentals Certification (AZ-900).
- Excellent communication skills in English.
Nice to Have :
- Experience in managing large enterprise-scale deployments.
- Familiarity with branching strategies and monitoring tools.
- Exposure to Approval Gates and Deployment Governance.
Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python (see the sketch after this list).
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
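For illustration of the Ansible + Python automation mentioned above, a minimal sketch that drives `ansible-playbook` from a Python wrapper; the playbook and inventory paths are hypothetical placeholders.

```python
"""Thin Python wrapper around ansible-playbook for per-environment deploys.

Assumptions: deploy.yml and the inventories/<env>/hosts.ini layout are
hypothetical; adapt to the real repository structure.
"""
import subprocess
import sys


def run_playbook(environment: str, playbook: str = "deploy.yml") -> int:
    """Run the playbook against the inventory for the given environment."""
    cmd = [
        "ansible-playbook",
        "-i", f"inventories/{environment}/hosts.ini",
        playbook,
        "--extra-vars", f"target_env={environment}",
    ]
    print("Running:", " ".join(cmd))
    return subprocess.call(cmd)


if __name__ == "__main__":
    env = sys.argv[1] if len(sys.argv) > 1 else "staging"
    sys.exit(run_playbook(env))
```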
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.
NOTE: This is a contractual role for a period of 3-6 months.
Responsibilities:
● Set up and maintain CI/CD pipelines across services and environments
● Monitor system health and set up alerts/logs for performance & errors (see the sketch after this list)
● Work closely with backend/frontend teams to improve deployment velocity
● Manage cloud environments (staging, production) with cost and reliability in mind
● Ensure secure access, role policies, and audit logging
● Contribute to internal tooling, CLI automation, and dev workflow improvements
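As a hedged sketch of the monitoring/alerting item above: a small Python script that counts recent error lines in an application log and flags when a threshold is exceeded. The log path and threshold are illustrative assumptions.

```python
"""Count recent error lines in an application log and flag a breach.

Assumptions: LOG_PATH and THRESHOLD are illustrative; a real setup would
feed an alerting channel or pager instead of printing.
"""
import re
from pathlib import Path

LOG_PATH = Path("/var/log/app/app.log")  # hypothetical log location
ERROR_PATTERN = re.compile(r"\b(ERROR|CRITICAL)\b")
THRESHOLD = 10


def count_recent_errors(path: Path, window_lines: int = 1000) -> int:
    """Count error-level lines in the last `window_lines` lines of the log."""
    lines = path.read_text(errors="replace").splitlines()[-window_lines:]
    return sum(1 for line in lines if ERROR_PATTERN.search(line))


if __name__ == "__main__":
    errors = count_recent_errors(LOG_PATH)
    status = "ALERT" if errors > THRESHOLD else "OK"
    print(f"{status}: {errors} error lines in the last window")
```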
Must-Haves:
● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
● Proficiency in writing scripts (Bash, Python) for automation
● Good understanding of system monitoring, logs, and alerting
● Strong debugging skills, ownership mindset, and clear documentation habits
● Familiarity with infra monitoring tools such as Grafana dashboards

Job Title: DevOps - 3
Roles and Responsibilities:
- Develop deep understanding of the end-to-end configurations, dependencies, customer requirements, and overall characteristics of the production services as the accountable owner for overall service operations
- Implement best practices, challenge the status quo, and keep tabs on industry and technical trends, changes, and developments to ensure the team is always striving for best-in-class work
- Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews
- Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues
- Possess expertise in designing and implementing capacity plans, accurately estimating costs and efforts for infrastructure needs.
- Systems and Infrastructure maintenance and ownership for production environments, with a continued focus on improving efficiencies, availability, and supportability through automation and well defined runbooks
- Provide mentorship and guidance to a team of DevOps engineers, fostering a collaborative and high-performing work environment. Mentor team members in best practices, technologies, and methodologies.
- Design for Reliability - Architect & implement solutions that keep Infrastructure running with always-on availability and ensure a high uptime SLA for the Infrastructure
- Manage individual project priorities, deadlines, and deliverables related to your technical expertise and assigned domains
- Collaborate with Product & Information Security teams to ensure the integrity and security of Infrastructure and applications. Implement security best practices and compliance standards.
Must Haves
- 5-8 years of experience as a DevOps / SRE / Platform Engineer.
- Strong expertise in automating Infrastructure provisioning and configuration using tools like Ansible, Packer, Terraform, Docker, Helm Charts etc.
- Strong skills in network services such as DNS, TLS/SSL, HTTP, etc.
- Expertise in managing large-scale cloud infrastructure (preferably AWS and Oracle)
- Expertise in managing production-grade Kubernetes clusters (see the sketch after this list)
- Experience in scripting with languages like Bash, Python, etc.
- Expertise with centralized logging, metrics, and tooling frameworks such as ELK, Prometheus/VictoriaMetrics, and Grafana
- Experience in managing and building high-scale API Gateways, Service Meshes, etc.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Have a working knowledge of a backend programming language
- Deep knowledge & experience with Unix / Linux operating system internals (e.g., filesystems, user management, etc.)
- A working knowledge and deep understanding of cloud security concepts
- Proven track record of driving results and delivering high-quality solutions in a fast-paced environment
- Demonstrated ability to communicate clearly with both technical and non-technical project stakeholders, with the ability to work effectively in a cross-functional team environment.
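For illustration of the production-grade Kubernetes expertise listed above, a minimal sketch using the official Kubernetes Python client to surface pods that are not healthy; the kubeconfig-based auth is an assumption and would differ when running in-cluster.

```python
"""List pods that are not Running/Succeeded across all namespaces.

Assumptions: a local kubeconfig is available (use load_incluster_config()
when running inside a cluster); requires `pip install kubernetes`.
"""
from kubernetes import client, config


def unhealthy_pods() -> list:
    """Return "namespace/name: phase" strings for pods in a bad phase."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            bad.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
    return bad


if __name__ == "__main__":
    for entry in unhealthy_pods():
        print(entry)
```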
- Configure, optimize, document, and support the infrastructure components of software products (which are hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services, and construct third-party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby, and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
Experience
- Experience working within an Agile type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker and Kubernetes
About Us
At Digilytics™, we build and deliver easy to use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients’ business.
Founded by Arindom Basu (Founding member of Infosys Consulting), the leadership of Digilytics™ is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics™ leadership is focused on building a values-first firm that will stand the test of time.
We are currently focused on developing a product, Revel FS, to revolutionise loan origination for mortgages and secured lending. We are also developing a second product, Revel CI, focused on improving trade (secondary) sales to consumer industry clients like auto and FMCG players.
The leadership strongly believes in the ethos of enabling intelligence across the organization. Digilytics AI is headquartered in London, with a presence across India.
Website: www.digilytics.ai
- Know about our products:
- Digilytics RevEL: https://www.digilytics.ai/RevEL/Digilytics
- Digilytics RevUP: https://www.digilytics.ai/RevUP/
- What's it like working at Digilytics: https://www.digilytics.ai/about-us.html
- Digilytics featured in Forbes: https://bit.ly/3zDQc4z
Responsibilities
- Experience with Azure services (Virtual machines, Containers, Databases, Security/Firewall, Function Apps etc)
- Hands-on experience with Kubernetes/Docker/Helm.
- Deployment of Java builds & administration/configuration of Nginx/reverse proxy, load balancers, MS SQL, GitHub, and disaster recovery
- Linux – must have basic knowledge of user creation/deletion, ACLs, LVM, etc.
- CI/CD – Azure DevOps or any other automation tool like Terraform, Jenkins, etc.
- Experience with SharePoint and O365 administration
- Azure/Kubernetes certification will be preferred.
- Microsoft Partnership experience is good to have.
- Excellent understanding of required technologies
- Good interpersonal skills and the ability to communicate ideas clearly at all levels
- Ability to work in unfamiliar business areas and to use your skills to create solutions
- Ability to both work in and lead a team and to deliver and accept peer review
- Flexible approach to working environment and hours to meet the needs of the business and clients
Must Haves:
- Hands-on experience with Kubernetes/Docker/Helm.
- Experience with Azure/AWS or any other cloud provider.
- Linux & CI/CD tools knowledge.
Experience & Education:
- A start-up mindset with proven experience working in both smaller and larger organizations with multicultural exposure
- Between 4-9 years of experience working closely with the relevant technologies, and developing world-class software and solutions
- Domain and industry experience by serving customers in one or more of these industries - Financial Services, Professional Services, other Retail Consumer Services
- A bachelor's degree, or equivalent, in Software Engineering and Computer Science
- Good experience with AWS services like Elastic Compute Cloud (EC2), IAM, RDS, API Gateway, Cognito, etc. (see the sketch after this list)
- Using Git, SonarQube, Ansible, Nexus, Nagios, etc.
- Strong experience in creating, importing, and launching volumes with security groups, auto-scaling, load balancers, and fault tolerance
- Experience in configuring Jenkins jobs with related plugins for building, testing, and continuous deployment to accomplish complete CI/CD.
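As a hedged illustration of the AWS (EC2, IAM, security groups) experience listed above: a short boto3 sketch that inventories instances and their security groups. It assumes default credentials are configured; the region value is a placeholder.

```python
"""Inventory EC2 instances and their attached security groups with boto3.

Assumptions: default AWS credentials are configured; the region is a
placeholder and should match the real deployment.
"""
import boto3


def list_instances(region: str = "ap-south-1") -> None:
    """Print instance id, state, and security group names."""
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                groups = [sg["GroupName"] for sg in inst.get("SecurityGroups", [])]
                print(inst["InstanceId"], inst["State"]["Name"], groups)


if __name__ == "__main__":
    list_instances()
```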
Job Summary
Creates, modifies, and maintains software applications individually or as part of a team. Provides technical leadership on a team, including training and mentoring of other team members. Provides technology and architecture direction for the team, department, and organization.
Essential Duties & Responsibilities
- Develops software applications and supporting infrastructure using established coding standards and methodologies
- Sets example for software quality through multiple levels of automated tests, including but not limited to unit, API, End to End, and load.
- Self-starter and self-organized - able to work without supervision
- Develops tooling, test harnesses and innovative solutions to understand and monitor the quality of the product
- Develops infrastructure as code to reliably deploy applications on demand or through automation
- Understands cloud managed services and builds scalable and secure applications using them
- Creates proof of concepts for new ideas that answer key questions of feasibility, desirability, and viability
- Work with other technical leaders to establish coding standards, development best practices and technology direction
- Performs thorough code reviews that promote better understanding throughout the team
- Work with architects, designers, business analysts and others to design and implement high quality software solutions
- Builds intuitive user interfaces with the end user persona in mind using front end frameworks and styling
- Assist product owners in backlog grooming, story breakdown and story estimation
- Collaborate and communicate effectively with team members and other stakeholders throughout the organization
- Document software changes for use by other engineers, quality assurance and documentation specialists
- Master the technologies, languages, and practices used by the team and project assigned
- Train others in the technologies, languages, and practices used by the team
- Troubleshoot, instrument, and debug existing software, resolving root causes of defective behavior
- Guide the team in setting up the infrastructure in the cloud.
- Setup the security protocols for the cloud infrastructure
- Works with the team in setting up the data hub in the cloud
- Create dashboards for visibility into the various interactions between the cloud services (see the sketch after this list)
- Other duties as assigned
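For the dashboard duty above, a minimal boto3 sketch that publishes a CloudWatch dashboard with a single example widget; the dashboard name, region, and Lambda function name are hypothetical placeholders.

```python
"""Create a small CloudWatch dashboard with boto3.

Assumptions: the dashboard name, region, and the example-fn Lambda metric
are placeholders; a real dashboard would chart the services in question.
"""
import json

import boto3


def create_dashboard(name: str = "service-interactions", region: str = "us-east-1") -> None:
    """Publish a one-widget dashboard charting Lambda invocations."""
    body = {
        "widgets": [
            {
                "type": "metric",
                "x": 0, "y": 0, "width": 12, "height": 6,
                "properties": {
                    "title": "Lambda invocations (example)",
                    "region": region,
                    "stat": "Sum",
                    "period": 300,
                    # Hypothetical function name; swap in real metrics per service.
                    "metrics": [["AWS/Lambda", "Invocations", "FunctionName", "example-fn"]],
                },
            }
        ]
    }
    boto3.client("cloudwatch", region_name=region).put_dashboard(
        DashboardName=name, DashboardBody=json.dumps(body)
    )


if __name__ == "__main__":
    create_dashboard()
```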
Experience
Education
- BA/BS in Computer Science, a related field or equivalent work experience
Minimum Qualifications
- Mastered advanced programming concepts, including object-oriented programming
- Mastered technologies and tools utilized by team and project assigned
- Able to train others on general programming concepts and specific technologies
- Minimum 8 years’ experience developing software applications
Skills/Knowledge
- Must be expert in advanced programming skills and database technology
- Must be expert in at least one technology and/or language and proficient in multiple technologies and languages:
- (Specific languages needed will vary based on development department or project)
- .Net Core, C#, Java, SQL, JavaScript, Typescript, Python
- Additional desired skills:
- Single-Page Applications, Angular (v9), Ivy, RXJS, NGRX, HTML5, CSS/SASS, Web Components, Atomic Design
- Test First approach, Test Driven Development (TDD), Automated testing (Protractor, Jasmine), Newman Postman, artillery.io
- Microservices, Terraform, Jenkins, Jupyter Notebook, Docker, NPM, Yarn, Nuget, NodeJS, Git/Gerrit, LaunchDarkly
- Amazon Web Services (AWS), Lambda, S3, Cognito, Step Functions, SQS, IAM, Cloudwatch, Elasticache
- Database Design, Optimization, Replication, Partitioning/Sharding, NoSQL, PostgreSQL, MongoDB, DynamoDB, Elastic Search, PySpark, Kafka
- Agile, Scrum, Kanban, DevSecOps
- Strong problem-solving skills
- Outstanding communications and interpersonal skills
- Strong organizational skills and ability to multi-task
- Ability to track software issues to successful resolution
- Ability to work in a collaborative fast paced environment
- Setting up complex AWS data storage hub
- Well versed in setting up infrastructure security in the interactions between the planned components
- Experienced in setting up dashboards for analyzing the various operations in the AWS infra setup.
- Ability to learn new development language quickly and apply that knowledge effectively
Are you the one? Quick self-discovery test:
- Love for the cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’
Your bucket of undertakings:
This position will be responsible for consulting with clients and proposing architectural solutions that help move and improve infrastructure from on-premise to the cloud, or that help optimize cloud spend when moving from one public cloud to another.
- Be the first one to experiment on new-age cloud offerings, help define the best practice as a thought leader for cloud, automation & Dev-Ops, be a solution visionary and technology expert across multiple channels.
- Continually augment skills and learn new tech as the technology and client needs evolve
- Use your experience in the Google cloud platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
- Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud-based technology and methods.
- Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
- Participate in technical reviews of requirements, designs, code, and other artifacts
- Identify and keep abreast of new technical concepts in the Google Cloud Platform
- Security, Risk, and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance, and related areas.
Accomplishment Set
- Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
- Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
- Strong service attitude and a commitment to quality.
- Highly organised and efficient.
- Confident working with others to inspire a high-quality standard.
Experience :
- 4-8 years experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems and/or Windows servers
- Specialize in one or two cloud deployment platforms: AWS, GCP
- Hands on experience with AWS services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine, API Gateway, AppSync and ServiceMesh)
- Experience in one or more scripting language-Python, Bash
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
- DevOps Technologies (AWS DevOps, Jenkins, Git, Maven)
- Knowledge on Configuration Management tools such as Ansible, Terraform, Puppet, Chef, Packer
- Experience working with deployment and orchestration technologies such as Docker, Kubernetes, and Mesos (see the sketch after this list)
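As a hedged sketch of the Docker-based deployment experience above, using the Docker SDK for Python (`pip install docker`): launch a throwaway container, list running containers, then clean up. The image and port mapping are illustrative choices, not requirements from this posting.

```python
"""Start, inspect, and clean up a container with the Docker SDK for Python.

Assumptions: a local Docker daemon is reachable; the nginx:alpine image and
host port 8080 are illustrative.
"""
import docker


def run_nginx_once() -> None:
    """Run a disposable nginx container, print running containers, then remove it."""
    client = docker.from_env()
    container = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
    try:
        for c in client.containers.list():
            print(c.short_id, c.name, c.status)
    finally:
        container.stop()
        container.remove()


if __name__ == "__main__":
    run_nginx_once()
```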
Education :
- Is Education overrated? Yes. We believe so. However, there is no way to locate you otherwise. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since you were 12. And the latter is better. We will find you faster if you specify the latter in some manner. Not just degrees, we are not too thrilled by tech certifications either ... :)
- To reiterate: Passion to tech-awesome, insatiable desire to learn the latest of the new-age cloud tech, highly analytical aptitude and a strong ‘desire to deliver’ outlives those fancy degrees!
- 3-8 years of experience with hands-on experience in Cloud Computing (AWS/GCP) and IT operational experience in a global enterprise environment.
- Good analytical, communication, problem solving, and learning skills.
- Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.







