
LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to support the development and operations efforts across the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience building secure, high-performing and scalable infrastructure, and experience automating and streamlining development operations and processes. You are adept at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO (a minimal snapshot-automation sketch follows this list)
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud and Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools that improve development operations and automate daily tasks
- Ensure high availability and auto-failover with minimal or no manual intervention
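To illustrate the backup-automation responsibility above, here is a minimal sketch of a scheduled EBS snapshot job in Python. It is not part of the posting: boto3 with configured AWS credentials, the region and the Backup/daily tag convention are all assumptions made for illustration.

```python
# Minimal sketch: nightly EBS snapshots as part of a backup/restore policy.
# Assumes boto3 credentials are configured; region and tag names are hypothetical.
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

def snapshot_tagged_volumes(tag_key="Backup", tag_value="daily"):
    """Snapshot every EBS volume carrying the given backup tag."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Volumes"]
    stamp = datetime.date.today().isoformat()
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"daily-backup-{stamp}",
        )

if __name__ == "__main__":
    snapshot_tagged_volumes()
```

A real policy would also prune old snapshots and copy them cross-region to meet the stated RTO/RPO targets.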
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 8 to 10 years of experience designing and maintaining high-volume, scalable microservices architecture on cloud infrastructure
- Strong background in Linux/Unix administration and Python/shell scripting
- Extensive experience with cloud platforms such as AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP and Azure
- Experience with deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools such as Zabbix, CloudWatch and Nagios (a minimal CloudWatch alarm sketch follows this list)
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture and caching mechanisms
- Experience in query analysis, performance tuning and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations for an always-up, always-available service
- Excellent written and oral communication, judgment and decision-making skills
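As a small, hedged example of the CloudWatch monitoring mentioned above, the sketch below creates a CPU alarm with boto3. The instance ID, SNS topic and thresholds are placeholders, not values from the posting.

```python
# Minimal sketch: a CPU-utilization alarm of the kind a CloudWatch-based
# monitoring setup would use. Instance ID and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-01-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=2,        # alarm after 10 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```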

About LogiNext
LogiNext is among the fastest-growing tech companies, providing solutions to simplify and automate the ecosphere of logistics and supply chain management. Our aim is to organize the daunting process of logistics and supply chain planning with an array of SaaS products driven by the most robust enterprise solutions globally.
Our clientele is spread across the globe, and we empower them to optimize their supply chain operations through unique data capture, advanced analytics and visualization. From inception, LogiNext has been an industry leader and a recipient of awards such as NetApp's Innovative Tech Company of the Year, Entrepreneur's Logistics Firm of the Year, Aegis's Innovation in Big Data, and the CIO Choice Award for best supply chain logistics cloud solutions.
Backed by influential industry leaders like PayTM and Indian Angel Network, and with partners like IBM, Microsoft, Google, AWS and Samsung, LogiNext has achieved exponential success in a very short span of time and is set to exceed 300% growth by the end of 2016. The true growth hackers who paved the way for this success are the people working exceptionally hard and adding value to our organisation. Our brand ambassadors (that's how we address our people) bring unique values, discipline and problem-solving skills to nurture the innovative and entrepreneurial work culture at LogiNext. Passion, versatility, expertise and a hunger for success are the mantra chanted by every Logi-Nexter!
Similar jobs
Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic or Datadog (a minimal Prometheus exporter sketch follows this list).
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
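As a hedged illustration of the monitoring responsibility above, the sketch below exposes a custom metric with the prometheus_client library for Prometheus to scrape and Grafana to chart. The metric name, port and queue source are hypothetical, not taken from the posting.

```python
# Minimal sketch: exposing a custom application metric for Prometheus to scrape,
# using the prometheus_client library. Metric name and port are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("worker_queue_depth", "Number of jobs waiting in the queue")

def read_queue_depth() -> int:
    # Placeholder for a real queue lookup (e.g. Redis LLEN or SQS attributes).
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(8000)          # Prometheus scrapes http://host:8000/metrics
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)
```

In a real deployment the scrape target would be registered in Prometheus's configuration or picked up via service discovery, with Grafana dashboards and alert rules layered on top.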
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team-lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can contact us directly: 9316120132.
Preferred Education & Experience:
- Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience. At least 3 years of relevant experience may be accepted in lieu of the above if you come from a different stream of education.
- Well-versed in DevOps principles and practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior-Driven Development, Test-Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
- Hands-on, demonstrable working experience with DevOps tools and platforms such as Slack, Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
- Well-versed in virtualization and containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc. (a minimal Kubernetes client sketch follows this list).
- Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure or Google Cloud across any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance, or equivalent demonstrable cloud platform experience.
- Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools and platforms.
- Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; fresh graduates or lateral movers into IT must be able to code in the languages they have studied.
- Well-versed in storage, networking and storage-networking basics, which will enable you to work in a cloud environment.
- Well-versed in network, data and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
- Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
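As a minimal, hedged illustration of the containerization experience listed above, the sketch below lists pods with the official Kubernetes Python client. It assumes a reachable cluster and a local kubeconfig; the namespace is illustrative.

```python
# Minimal sketch: listing pods with the official Kubernetes Python client,
# the sort of read-only check a containerization exercise might start from.
# Assumes a working kubeconfig (e.g. ~/.kube/config) for the target cluster.
from kubernetes import client, config

def list_pods(namespace: str = "default") -> None:
    config.load_kube_config()   # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(f"{pod.metadata.name:<40} {pod.status.phase}")

if __name__ == "__main__":
    list_pods()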
Required Experience: 5+ Years
Job Location: Remote/Pune
- 3+ years of relevant experience
- 2+ years' experience with AWS (EC2, ECS, RDS, ElastiCache, etc.)
- Well-versed in maintaining infrastructure as code (Terraform, CloudFormation, etc.)
- Experience in setting up CI/CD pipelines from scratch
- Knowledge of setting up and securing networks (VPN, Intranet, VPC, Peering, etc.)
- Understanding of common security issues (a minimal security-group audit sketch follows this list)
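As a hedged example of the kind of security check implied above, the sketch below uses boto3 to flag security groups with ingress open to 0.0.0.0/0, a common issue in loosely managed VPCs. It assumes configured AWS credentials and is illustrative only.

```python
# Minimal sketch: flag security groups that allow inbound traffic from anywhere.
import boto3

ec2 = boto3.client("ec2")

def find_open_ingress():
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group["IpPermissions"]:
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
                    print(f"{group['GroupId']} ({group['GroupName']}) open to the world on port(s) {ports}")

if __name__ == "__main__":
    find_open_ingress()
```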
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing End-to-End Data Solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward in their digital transformation via assessments, migration, or modernization.
We are looking for a DevOps Engineer with expertise in infrastructure as code, configuration management, continuous integration, continuous deployment, and automated monitoring for big data workloads, large enterprise applications, customer applications, and databases.
You will have hands-on technology expertise coupled with a background in professional services and client-facing skills. You are passionate about cloud deployment best practices and about ensuring customer expectations are set and met appropriately. If you love to solve problems using your skills, then join Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What you will do?
- Automate infrastructure creation with Terraform and AWS CloudFormation (a minimal CloudFormation-from-Python sketch follows this list).
- Perform application configuration management and application deployment, enabling infrastructure as code.
- Take ownership of the build and release cycle of the customer project.
- Share the responsibility for deploying releases and conducting other operations maintenance.
- Enhance operations infrastructures such as Jenkins clusters, Bitbucket, monitoring tools (Consul), and metrics tools such as Graphite and Grafana.
- Provide operational support for the rest of the Engineering team and help migrate our remaining dedicated hardware infrastructure to the cloud.
- Establish and maintain operational best practices.
- Participate in hiring engineers who are a cultural fit for the organization, and help engineers shape their career paths by consulting with them.
- Design the team strategy in collaboration with the founders of the organization.
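As a hedged sketch of the infrastructure-automation work described above, the snippet below drives AWS CloudFormation from Python with boto3. The stack name, template file, parameters and region are hypothetical, not part of the posting.

```python
# Minimal sketch: creating a CloudFormation stack from Python and waiting for it.
import boto3

cfn = boto3.client("cloudformation", region_name="us-west-2")

def deploy_stack(stack_name: str, template_path: str) -> None:
    with open(template_path) as handle:
        template_body = handle.read()
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": "staging"}],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    # Block until the stack is fully created (raises if creation fails).
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

if __name__ == "__main__":
    deploy_stack("demo-vpc", "vpc.yaml")
```

The same flow is often wrapped in a CI/CD job so that template changes are reviewed and rolled out like application code.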
What are we looking for?
- 4+ years of experience using Terraform for IaC
- 4+ years of configuration management and engineering for large-scale customers, ideally supporting an Agile development process.
- 4+ years of Linux or Windows Administration experience.
- 4+ years of experience with version control systems (Git), including branching and merging strategies.
- 2+ years of experience working with AWS infrastructure and platform services.
- 2+ years of experience with cloud automation tools (Ansible, Chef).
- Exposure to working on container services like Kubernetes on AWS, ECS, and EKS
- You are extremely proactive at identifying ways to improve things and to make them more reliable.
You will be preferred if
- Expertise in multiple cloud services provider: Amazon Web Services, Microsoft Azure, Google Cloud Platform
- AWS Solutions Architect Professional or Associate Level Certificate
- AWS DevOps Professional Certificate
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles, which honor decision-making, leadership, collaboration, and curiosity, drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
Job Description
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company supporting global enterprise customers across the Americas, Europe and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive’s global, world-class technology teams, working with some of the best and brightest engineers while also developing your skills and furthering your career working with some of the largest customers.
Key Responsibilities and Must-have skills:
- Lead the pre-sales (25%) to post-sales (75%) efforts building Public/Hybrid Cloud solutions working collaboratively with Intuitive and client technical and business stakeholders
- Be a customer advocate with obsession for excellence delivering measurable success for Intuitive’s customers with secure, scalable, highly available cloud architecture that leverage AWS Cloud services
- Experience in analyzing customer's business and technical requirements, assessing existing environment for Cloud enablement, advising on Cloud models, technologies and risk management strategies
- Apply creative thinking/approach to determine technical solutions that further business goals and align with corporate technology strategies
- Extensive experience building Well Architected solutions in-line with AWS cloud adoption framework (DevOps/DevSecOps, Database/Data Warehouse/Data Lake, App Modernization/Containers, Security, Governance, Risk, Compliance, Cost Management and Operational Excellence)
- Experience with application discovery, preferably with tools like Cloudscape, to discover application configurations, databases, filesystems, and application dependencies
- Experience with Well-Architected Reviews, Cloud Readiness Assessments and defining migration patterns (MRA/MRP) for application migration, e.g. re-host, re-platform, re-architect, etc.
- Experience in architecting and deploying AWS Landing Zone architecture with CI/CD pipeline
- Experience on architecture, design of AWS cloud services to address scalability, performance, HA, security, availability, compliance, backup and DR, automation, alerting and monitoring and cost
- Hands-on experience in migrating applications to AWS leveraging proven tools and processes including migration, implementation, cutover and rollback plans and execution
- Hands-on experience in deploying various AWS services (e.g. EC2, S3, VPC, RDS, Security Groups) either manually or via IaC, with IaC preferred (a minimal boto3 governance-check sketch follows this list)
- Hands-on experience in writing cloud automation scripts/code with Ansible, Terraform, CloudFormation templates (AWS CFT), etc.
- Hands-on experience with application build/release processes and CI/CD pipelines
- Deep understanding of Agile processes (planning, stand-ups, retros, etc.) and the ability to interact with cross-functional teams, i.e. Development, Infrastructure, Security, Performance Engineering, and QA
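As a hedged illustration of the governance and security side of a Well-Architected engagement, the sketch below uses boto3 to report S3 buckets without default encryption configured. It is a toy check under assumed credentials, not a complete compliance audit.

```python
# Minimal sketch: report S3 buckets that have no default encryption configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_without_default_encryption():
    unencrypted = []
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            s3.get_bucket_encryption(Bucket=bucket["Name"])
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                unencrypted.append(bucket["Name"])
            else:
                raise
    return unencrypted

if __name__ == "__main__":
    for name in buckets_without_default_encryption():
        print(f"Bucket without default encryption: {name}")
```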
Additional Requirements:
- Work with Technology leadership to grow the Cloud & DevOps practice. Create cloud practice collateral
- Work directly with sales teams to improve and help them drive the sales for Cloud & DevOps practice
- Assist Sales and Marketing team in creating sales and marketing collateral
- Write whitepapers and technology blogs to be published on social media and Intuitive website
- Create case studies for projects successfully executed by Intuitive delivery team
- Conduct sales enablement sessions to coach sales team on new offerings
- Flexibility with work hours to support customer requirements and collaboration with global delivery teams
- Flexibility with Travel as required for Pre-sales/Post-sales, Design workshops, War-room Migration events and customer meetings
- Strong passion for modern technology exploration and development
- Excellent written and verbal communication, presentation, and collaboration skills
- Team leadership skills
- Experience with Multi-cloud (Azure, GCP, OCI) is a big plus
- Experience with VMware Cloud Foundation as well as Advanced Windows and Linux Engineering is a big plus
- Experience with On-prem Data Engineering (Database, Data Warehouse, Data Lake) is a big plus
- Bachelor’s and/or master’s degree in Computer Science, Computer Engineering or related technical discipline
- About 5 years of professional experience supporting AWS cloud environments
- AWS Certified Solutions Architect (Associate or Professional)
- Experience serving as lead (shift management, reporting) will be a plus
- AWS Certified Solutions Architect – Professional (must have)
- Minimum 4 years' experience, maximum 8 years' experience.
- 100% work from office in Hyderabad
- Very fluent in English
- Experience in implementing DevOps practices and DevOps tools in areas like CI/CD (using Jenkins), environment automation, release automation, virtualization, infrastructure as code and metrics tracking.
- Hands-on experience in DevOps tool configuration in different environments.
- Strong knowledge of working with DevOps design patterns, processes and best practices
- Hands-on experience in setting up build pipelines.
- Prior working experience in system administration or architecture in Windows or Linux.
- Must have experience with Git (Bitbucket, GitHub, GitLab)
- Hands-on experience on Jenkins pipeline scripting.
- Hands-on knowledge in one scripting language (Nant, Perl, Python, Shell or PowerShell)
- Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
- Expertise in virtual infrastructure (VMware, VirtualBox, QEMU, KVM or Vagrant) and environment automation/provisioning using SaltStack/Ansible/Puppet/Chef
- Deploying, automating, maintaining and managing Azure cloud-based production systems, including monitoring capacity (a minimal Azure SDK sketch follows this list).
- Good to have experience in migrating code repositories from one source control to another.
- Hands-on experience with Docker containers and orchestration-based deployments like Kubernetes, Service Fabric and Docker Swarm.
- Must have good communication and problem-solving skills
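As a hedged illustration of managing Azure-based production systems, the sketch below inventories virtual machines with the Azure SDK for Python. The subscription ID is a placeholder, and azure-identity plus azure-mgmt-compute are assumed to be installed.

```python
# Minimal sketch: read-only VM inventory, a starting point for capacity monitoring.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def list_vms() -> None:
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
    for vm in compute.virtual_machines.list_all():
        print(f"{vm.name:<30} {vm.location:<15} {vm.hardware_profile.vm_size}")

if __name__ == "__main__":
    list_vms()
```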
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines and cloud infrastructure, to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines and highly scalable backend development. The successful candidate will be working in a team of highly skilled and experienced developers, data scientists and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes and methodologies
- Experience with AWS CloudFormation and the AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS Cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, serverless stacks and container infrastructure (a minimal serverless inventory sketch follows this list)
- An interest in healthcare and medical sectors
- Technical degree with 4+ years' infrastructure and automation experience
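As a hedged illustration of serverless-stack housekeeping, the sketch below lists Lambda functions and their runtimes with boto3. It assumes configured AWS credentials and is purely illustrative.

```python
# Minimal sketch: inventory Lambda functions, runtimes and memory settings.
import boto3

lambda_client = boto3.client("lambda")

def list_functions() -> None:
    paginator = lambda_client.get_paginator("list_functions")
    for page in paginator.paginate():
        for fn in page["Functions"]:
            runtime = fn.get("Runtime", "container-image")
            print(f"{fn['FunctionName']:<40} {runtime:<16} {fn['MemorySize']} MB")

if __name__ == "__main__":
    list_functions()
```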








