
- Must have a minimum of 3 years of experience managing AWS resources and automating CI/CD pipelines.
- Strong scripting skills in PowerShell, Python, or Bash to build and administer CI/CD pipelines.
- Knowledge of infrastructure tools such as CloudFormation, Terraform, and Ansible.
- Experience with microservices and/or event-driven architecture.
- Experience using containerization technologies (Docker, ECS, Kubernetes, Mesos or Vagrant).
- Strong practical Windows and Linux system administration skills in the cloud.
- Understanding of DNS, NFS, TCP/IP and other protocols.
- Knowledge of secure SDLC, OWASP top 10 and CWE/SANS top 25.
- Deep understanding of WebSockets and how they function. Hands-on experience with ElastiCache, Redis, ECS, or EKS. Installation, configuration, and management of Apache or Nginx web servers and the Apache Tomcat application server, including configuring SSL certificates and setting up reverse proxies.
- Exposure to RDBMS (MySQL, SQL Server, Aurora, etc.) is a plus.
- Exposure to programming languages like Java, PHP, and SQL is a plus.
- AWS Developer or AWS SysOps Administrator certification is a plus.
- AWS Solutions Architect certification is a plus.
- Experience building Blue/Green, Canary, or other zero-downtime deployment strategies is a plus, as is an advanced understanding of VPC, EC2, Route 53, IAM, and Lambda.
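As an illustration of the zero-downtime strategies listed above, a canary rollout can be modeled as a schedule of traffic weights that shifts load gradually from the old version to the new one. This is a minimal sketch in pure Python; the number of steps is an arbitrary choice, and a real rollout would gate each step on health checks:

```python
def canary_schedule(steps=5):
    """Return (old_weight, new_weight) percentage pairs that gradually
    shift traffic from the current version to the canary."""
    pairs = []
    for i in range(steps + 1):
        new = round(100 * i / steps)
        pairs.append((100 - new, new))
    return pairs

# Each pair sums to 100% of traffic; in practice, promotion to the
# next step happens only if health checks pass at the current one.
print(canary_schedule())
```

The same weight pairs could then be fed to a load balancer or weighted DNS records to perform the actual traffic shift.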

Preferred Education & Experience:
• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior-Driven Development, Test-Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms such as Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS, Azure, and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure, and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance; or equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java, Python, JavaScript, and/or Scala; freshers out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, network, and storage-networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS, Azure, and/or Google Cloud.
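As one concrete instance of the Data Protection & Encryption item above, message integrity can be demonstrated with an HMAC from Python's standard library. The key below is a hypothetical placeholder for illustration only; real keys would come from a secrets manager:

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> str:
    """Return a hex HMAC-SHA256 tag so a receiver holding the same
    key can verify the message was not tampered with."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = b"demo-key"  # placeholder; never hard-code production secrets
tag = sign(key, b"payload")

# Verification uses a constant-time comparison to avoid timing leaks.
assert hmac.compare_digest(tag, sign(key, b"payload"))
```

The same pattern underlies request signing in many cloud APIs, where each request carries an HMAC computed over its contents.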
Required Experience: 5+ Years
Job Location: Remote/Pune
- 7-10 years' total experience, including 6+ years in a production 24/7 high-availability multi-site cloud environment, covering application hosting, CDN networks, security, and information protection.
- Experience of leading overall infrastructure for a complex organization and network, including 24x7 monitoring of a media website & digital properties.
- Experience in hosting and managing video streaming applications, React & Node JS based applications.
- Experience with regulatory compliance issues, as well as best practices in application and network security.
- Experience in hosting services on Amazon Cloud & Google Cloud.
- Experience in performing vulnerability assessments at the server & application level.
- Experience in managing Live Streaming on digital platforms.
- Experience in managing SVN, Git code repository & Code release management.
- Partners with the Technology Head to lead the technology infrastructure strategy and execution for the enterprise.
- Planning, project management and implementation leadership, identifying opportunities for automation, cost savings, and service quality improvement.
- Provides infrastructure services vision, enables innovation and seeks to leverage market trends that can create business value consistent with the company’s requirements and expectations.
- Participate in the formulation of the company's enterprise architecture and business system plans; assessing cost and feasibility, and ensuring the plan is aligned with and supports the strategic goals of the business
- Hands-on technical depth enables direct oversight, problem-solving leadership and participation for complex infrastructure implementation, system upgrades and operational troubleshooting.
- Experience with comprehensive disaster recovery architecture and operations, including storage area network and redundant, highly-available server and network architectures.
- Leadership for delivery of 24/7 service operations and KPI compliance.
- Ensure best practices are followed for code release management & monitoring of traffic on websites & other applications.
Job Description
• Minimum 3+ years of experience in DevOps on the AWS platform
• Strong AWS knowledge and experience
• Experience using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)
• Experience with IaC tools such as Terraform
• Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)
• Significant experience with Linux operating system environments
• Experience with infrastructure scripting solutions such as Python/Shell scripting
• Must have experience in designing Infrastructure automation framework.
• Good experience in setting up monitoring tools and dashboards (Grafana/Kafka)
• Excellent problem-solving, Log Analysis and troubleshooting skills
• Experience in setting up centralized logging for systems (EKS, EC2) and applications
• Process-oriented with great documentation skills
• Ability to work effectively within a team and with minimal supervision
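Centralized logging, as required above, usually starts with emitting structured records that a log shipper (for example Fluent Bit or the ELK stack) can parse and forward. A minimal JSON formatter using only the standard library might look like the following; the field names are illustrative, not a fixed schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, a shape
    that log shippers can forward to a central store unambiguously."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("deployment finished")
```

One JSON object per line keeps the stream greppable locally while remaining machine-parseable downstream.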
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.
We are looking for a DevOps Engineer with expertise in infrastructure as code, configuration management, continuous integration, continuous deployment, and automated monitoring for big data workloads, large enterprise applications, customer applications, and databases.
You will have hands-on technology expertise coupled with a background in professional services and client-facing skills. You are passionate about the best practices of cloud deployment and ensuring the customer expectation is set and met appropriately. If you love to solve problems using your skills, then join the Team Mactores. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What you will do?
- Automate infrastructure creation with Terraform and AWS CloudFormation.
- Perform application configuration management and application deployment, enabling infrastructure as code.
- Take ownership of the Build and release cycle of the customer project.
- Share the responsibility for deploying releases and conducting other operations maintenance.
- Enhance operations infrastructures such as Jenkins clusters, Bitbucket, monitoring tools (Consul), and metrics tools such as Graphite and Grafana.
- Provide operational support for the rest of the Engineering team and help migrate our remaining dedicated hardware infrastructure to the cloud.
- Establish and maintain operational best practices.
- Participate in hiring culturally fit engineers for the organization, and help engineers shape their career paths by consulting with them.
- Design the team strategy in collaboration with founders of the organization.
What are we looking for?
- 4+ years of experience using Terraform for IaC.
- 4+ years of configuration management and engineering for large-scale customers, ideally supporting an Agile development process.
- 4+ years of Linux or Windows Administration experience.
- 4+ years of experience with version control systems (Git), including branching and merging strategies.
- 2+ years of experience working with AWS infrastructure and platform services.
- 2+ years of experience with cloud automation tools (Ansible, Chef).
- Exposure to working on container services like Kubernetes on AWS, ECS, and EKS
- You are extremely proactive at identifying ways to improve things and to make them more reliable.
You will be preferred if
- Expertise in multiple cloud services provider: Amazon Web Services, Microsoft Azure, Google Cloud Platform
- AWS Solutions Architect Professional or Associate Level Certificate
- AWS DevOps Professional Certificate
Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative
We would like you to read more details about the work culture on https://mactores.com/careers
The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
Pre-Employment Assessment:
You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data platforms at Petabyte scale. Our customers are Fortune 500 companies including Asia's largest telecom company, a unicorn fintech startup of India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary-
This role will support the customer implementation of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proof of concepts with prospects, and become a product expert and troubleshoot post installation, production issues. The role will have significant interaction with the client data engineering team and the candidate is expected to have good communication skills.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required
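The data-quality troubleshooting described above often reduces to simple SQL checks run against a pipeline's tables. As an illustrative sketch (the table and column names here are hypothetical), a null-rate check using Python's built-in sqlite3 might look like this; against a production warehouse, the same query shape would run through that system's own driver:

```python
import sqlite3

def null_rate(conn, table, column):
    """Fraction of rows where `column` is NULL -- a basic
    data-reliability signal for a pipeline health check."""
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    return nulls / total if total else 0.0

# Demo with an in-memory database and hypothetical `events` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER)")
conn.executemany("INSERT INTO events VALUES (?)",
                 [(1,), (2,), (None,), (None,)])
print(null_rate(conn, "events", "user_id"))  # 0.5
```

A reliability product typically evaluates many such metrics (null rates, row counts, freshness) on a schedule and alerts when they drift.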
DevOps Engineer
The DevOps team is one of the core technology teams of Lumiq.ai and is responsible for managing network activities, automating Cloud setups and application deployments. The team also interacts with our customers to work out solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how you can use various technologies to improve user experience, then Lumiq is the place of opportunities.
Job Description
- Explore the newest innovations in scalable and distributed systems.
- Help design the project architecture, solutions to existing problems, and future improvements.
- Make the cloud infrastructure and services smart by implementing automation and trigger-based solutions.
- Interact with Data Engineers and Application Engineers to create continuous integration and deployment frameworks and pipelines.
- Play around with large clusters on different clouds to tune your jobs or to learn.
- Research new technologies, prove concepts, and plan how to integrate or upgrade.
- Be part of discussions of other projects, to learn or to help.
Requirements
- 2+ years of experience as a DevOps Engineer.
- You understand everything from physical networking to software-defined networking.
- You like containers and open-source orchestration systems like Kubernetes and Mesos.
- You have experience securing systems by creating robust access policies and enforcing network restrictions.
- You know how applications work, which is essential for designing distributed systems.
- You have contributed to open-source projects and have discussed their shortcomings or problems with the community on several occasions.
- You understand that provisioning a Virtual Machine is not DevOps.
- You know you are not a SysAdmin but a DevOps Engineer: the person who develops operations so the system runs efficiently and scalably.
- You have exposure to Private Clouds, Subnets, VPNs, Peering, and Load Balancers, and have worked with them.
- You check the logs before screaming about an error.
- Multiple screens make you more efficient.
- You are a doer who doesn't say the word "impossible".
- You understand the value of documenting your work.
- You understand the Big Data ecosystem and how to leverage the cloud for it.
- You know these buddies - #airflow, #aws, #azure, #gcloud, #docker, #kubernetes, #mesos, #acs
Location: Bengaluru
Department: DevOps
We are looking for extraordinary infrastructure engineers to build a world-class cloud platform that scales to millions of users. You must have experience building key portions of a highly scalable infrastructure using Amazon AWS and should know EC2, S3, and EMR like the back of your hand. You must enjoy working in a fast-paced startup and enjoy wearing multiple hats to get the job done.
Responsibilities
● Manage the AWS server farm; own AWS infrastructure automation and support.
● Own production deployments in multiple AWS environments.
● Own the end-to-end backend engineering infra charter, including DevOps, global deployment, security, and compliance according to the latest practices. Guide the team in debugging production issues and write best-of-breed code.
● Drive “engineering excellence” (defects, productivity through automation,
performance of products etc) through clearly defined metrics.
● Stay current with the latest tools, technology ideas and methodologies;
share knowledge by clearly articulating results and ideas to key decision
makers.
● Hire, mentor, and retain a very talented team.
Requirements
● B.S. or M.S in Computer Science or a related field (math, physics,
engineering)
● 5-8 years of experience maintaining infrastructure systems/DevOps.
● Enjoy playing with tech like nginx, haproxy, postgres, AWS, ansible, docker, nagios, or graphite.
● Deployment automation experience with Puppet/Chef/Ansible/SaltStack.
● Work with small, tightly knit product teams that function cohesively to move as quickly as possible.
● Determination to provide reliable and fault tolerant systems to the
application developers that consume them
● Experience in developing Java/C++ backend systems is a huge plus.
● Be a strong team player.
Preferred
● Deep working knowledge of Linux servers and networked environments.
● Thorough understanding of distributed systems and the protocols they use, including TCP/IP, RESTful APIs, SQL, and NoSQL.
● Experience in managing a NoSQL database (Cassandra) is a huge plus.
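The protocol knowledge listed above can be demonstrated down at the socket level. This minimal sketch uses a connected socket pair from the standard library to stand in for a client and server, showing raw HTTP-over-TCP bytes without needing a real network:

```python
import socket

# socketpair() yields two connected endpoints, standing in for a
# client and a server without opening any real network port.
client, server = socket.socketpair()

client.sendall(b"GET /health HTTP/1.1\r\n\r\n")   # client sends a request
request = server.recv(1024)                        # server reads raw bytes
server.sendall(b"HTTP/1.1 200 OK\r\n\r\nok")      # server replies
response = client.recv(1024)                       # client reads the reply

client.close()
server.close()
print(response.decode())
```

Seeing REST as plain bytes over a stream makes debugging with tools like tcpdump or curl far less mysterious.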
- 3-6 years of relevant work experience in a DevOps role.
- Deep understanding of Amazon Web Services or equivalent cloud platforms.
- Proven record of infra automation and programming skills in any of these languages: Python, Ruby, Perl, JavaScript.
- Implement DevOps Industry best practices and the application of procedures to achieve a continuously deployable system
- Continuously improve and increase the capabilities of the CI/CD pipeline
- Support engineering teams in the implementation of life-cycle infrastructure solutions, and document operations in order to meet the engineering department's quality standards
- Participate in production outages, handle complex issues, and work toward resolution



DevOps Engineer Skills
- Building a scalable and highly available infrastructure for data science.
- Knows data science project workflows.
- Hands-on with deployment patterns for online/offline predictions (server/serverless).
- Experience with either Terraform or Kubernetes.
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker.
- Working knowledge of Jenkins or a similar tool.
Responsibilities
- Own all the ML cloud infrastructure (AWS).
- Help build out an entire CI/CD ecosystem with auto-scaling.
- Work with a testing engineer to design testing methodologies for ML APIs.
- Ability to research & implement new technologies.
- Help with cost optimization of infrastructure.
- Knowledge sharing.
Nice to Have
- Develop APIs for machine learning.
- Can write Python servers for ML systems with API frameworks.
- Understanding of task queue frameworks like Celery.
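As a sketch of the "Python servers for ML systems" item, a prediction endpoint can be exposed with only the standard library. The `predict` function here is a hypothetical stand-in for a real model loaded from, say, MLflow or SageMaker; production services would typically use a proper framework instead of `http.server`:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Hypothetical stand-in for a trained model's inference call."""
    return {"score": sum(features) / len(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run it through the model.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve the API (blocking call):
# HTTPServer(("", 8080), PredictHandler).serve_forever()
```

Keeping `predict` separate from the HTTP handler makes the inference logic unit-testable without starting a server.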

