Role Description:
● Own, deploy, configure, and manage infrastructure environments and/or applications in
both private and public clouds through cross-technology administration (OS, databases,
virtual networks), scripting, and monitoring of automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or
automation incidents/requests.
● Manage rollout of patches and release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command line tools and shells
● Strong working knowledge of Linux/UNIX and related applications
● Knowledge of implementing DevOps practices and an inclination towards automation.
● Sound knowledge of infrastructure-as-code approaches with Puppet, Chef, Ansible, or
Terraform, and Helm (preference for Terraform, Ansible, and Helm).
● Must have strong experience in technologies such as Docker, Kubernetes, OpenShift,
etc.
● Working with REST/gRPC/GraphQL APIs
● Knowledge in networking, firewalls, network automation
● Experience with Continuous Delivery pipelines - Jenkins/JenkinsX/ArgoCD/Tekton.
● Experience with Git, GitHub, and related tools
● Experience in at least one public cloud provider
Skills/Competencies
● Foundation: OS (Linux/Unix) and networking concepts and troubleshooting
● Automation: Bash, Python, or Golang
● CI/CD & Config Management: Jenkins, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infrastructure as Code: Terraform
● Platform: Docker, Kubernetes, VMs
● Databases: MySQL, PostgreSQL; data stores (MongoDB, Redis, Aerospike) good to have
● Security: vulnerability management and golden images
● Cloud: deep working knowledge of any public cloud (GCP preferred)
● Monitoring Tools: Prometheus, Grafana, New Relic
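To make the "Automation" and "Monitoring Tools" expectations above concrete, here is a minimal, illustrative Python sketch of a threshold-based alerting rule; it mirrors the kind of logic that tools like Prometheus encode declaratively in alert expressions. The function name and threshold values are hypothetical, not part of the role description:

```python
from statistics import mean

# Hypothetical thresholds; real values would come from the team's SLOs.
CPU_WARN, CPU_CRIT = 0.75, 0.90

def alert_level(cpu_samples):
    """Classify a window of CPU-utilisation samples (0.0-1.0)
    into 'ok', 'warning', or 'critical'."""
    avg = mean(cpu_samples)
    if avg >= CPU_CRIT:
        return "critical"
    if avg >= CPU_WARN:
        return "warning"
    return "ok"

if __name__ == "__main__":
    print(alert_level([0.5, 0.6, 0.55]))    # ok
    print(alert_level([0.92, 0.95, 0.91]))  # critical
```

In a real deployment this rule would live in a Prometheus alerting configuration rather than in ad hoc scripts, but the underlying averaging-and-threshold logic is the same.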
About EnterpriseMinds
Enterprise Minds, with a core focus on engineering products, automation, and intelligence, partners with customers on a trajectory toward increasing outcomes, relevance, and growth.
Harnessing the power of data and the forces that define AI, machine learning, and data science, we believe in institutionalising go-to-market models, not just exploring possibilities.
We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership, and collaboration, our people work in a spirit of co-creation, co-innovation, and co-development to engineer next-generation software products with the help of accelerators.
Through communities we connect and attract talent that shares skills and expertise. Through innovation labs and global design studios we deliver creative solutions.
We create vertically isolated pods with a narrow but deep focus, and horizontal pods that collaborate to deliver sustainable outcomes.
We follow Agile methodologies to fail fast and deliver scalable, modular solutions, and we constantly self-assess and realign to work with each customer in the most impactful manner.
Role: DevOps engineer
We are looking for an experienced DevOps engineer who will work closely with our client’s team to establish a DevOps practice.
Your role will include establishing configuration management, automating the infrastructure, implementing continuous integration, and training the team in DevOps best practices to achieve a continuously deployable system.
You will be part of a continually growing team. You will have the chance to be creative and think of new ideas. You might also get the opportunity to work on some open-source projects, ranging from small to large.
What you’ll be doing – your role
Duties and tasks are varied and complex, and may require independent judgement; you should be fully competent in your own area of expertise. Some of the responsibilities associated with the role include:
- Improve CI/CD tooling
- Implement and improve monitoring and alerting
- Help support daily operations through the use of automation and assist in building a DevOps culture with our engineers for a better all-around software development and deployment experience
- Develop and maintain solutions for highly resilient services and infrastructure
- Implement automation to help deploy our services and maintain their operational health
- Contribute to the understanding of how our services are being used and help plan the capacity needs for future growth
What do we expect – experience and skills:
- Bachelor’s degree in Computer Science or related technical field, involving coding
- 3+ years of experience running large-scale customer-facing services
- 2-3 years of DevOps experience
- A strong desire and aptitude for system automation defines success in this role
- Linux experience, including expertise in system installation, configuration, administration, troubleshooting
- Experience with cloud-based providers such as AWS
- Experience with Kubernetes
- Experience with web-based APIs / RESTful services
- Experience with Configuration Management and Infrastructure as Code platforms (Terraform)
- Experience with at least one scripting language (Python, Bash, JavaScript)
- Methodical approach to troubleshooting and documenting issues
- Experience in Docker orchestration and management
- Experience with implementing and maintaining CI/CD pipelines (mainly GitHub Actions and ArgoCD)
- Experience in implementing comprehensive monitoring and logging solutions using Grafana, Prometheus, and Loki.
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with Monitoring tools like Prometheus / Nagios / Zabbix.
Developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef is preferred.
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
Work in close coordination with the development and operations teams so that the application performs in line with the customer's expectations.
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3 - 6 years of experience in Linux/Unix and cloud computing.
Familiar with working on cloud and datacenter environments for enterprise customers.
Hands-on experience with Linux, Windows, and macOS, and with Batch, AppleScript, and Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Must know how to choose the best tools and technologies which best fit the business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins etc.
Excellent organizational skills to adapt to a constantly changing technical environment
Why LiftOff?
We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.
Many on the team are serial entrepreneurs with a history of successful exits.
As a DevOps Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.
Must Have
*At least 2 years of hands-on work experience with Kubernetes, preferably on Azure Cloud.
*Well-versed with Kubectl
*Experience in using Azure Monitor, setting up analytics and reports for Azure containers and services.
*Monitoring and observability
*Setting Alerts and auto-scaling
Nice to have
*Scripting and automation
*Experience with Jenkins or any sort of CI/CD pipelines
*Past experience in setting up cloud infrastructure, configurations and database backups
*Experience with Azure App Service
*Experience setting up WebSocket-based applications.
*Working knowledge of Azure APIM
We are a group of passionate people driven by core values. We strive to make every process transparent and have flexible work timings along with excellent startup culture and vibe.
Looking for an experienced candidate with strong development and programming experience; preferred knowledge:
- Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
- A strong development background with programming experience in Java and/or NodeJS (other programming languages such as Groovy/Python are a big bonus)
- Proficient with Unix systems and bash
- Proficient with git/GitHub/GitLab/bitbucket
Desired skills-
- Docker
- Kubernetes
- Jenkins
- Experience in any scripting language (Python, Shell scripting, JavaScript)
- NGINX / Load Balancer
- Splunk / ETL tools
We are hiring DevOps Engineers for a luxury-commerce platform that is well-funded and is now ready for its next level of growth. It is backed by reputed investors and is already a leader in its space. The focus for the coming years will be heavily on scaling the platform through technology. Market-driven competitive salary for the right candidate.
Job Title: DevOps System Engineer
Responsibilities:
- Implementing, maintaining, monitoring, and supporting the IT infrastructure
- Writing scripts for service quality analysis, monitoring, and operation
- Designing procedures for system troubleshooting and maintenance
- Investigating and resolving technical issues by deploying updates/fixes
- Implementing automation tools and frameworks for automatic code deployment (CI/CD)
- Quality control and management of the codebase
- Ownership of infrastructure and deployments in various environments
Requirements:
- Degree in Computer Science, Engineering or a related field
- Prior experience as a DevOps engineer
- Good knowledge of various operating systems: Linux, Windows, Mac
- Good knowledge of networking, virtualization, and containerization technologies
- Familiarity with software release management and deployment (Git, CI/CD)
- Familiarity with one or more popular cloud platforms such as AWS, Azure, etc.
- Solid understanding of DevOps principles and practices
- Knowledge of systems and platforms security
- Good problem-solving skills and attention to detail
Skills: Linux, Networking, Docker, Kubernetes, AWS/Azure, Git/GitHub, Jenkins, Selenium, Puppet/Chef/Ansible, Nagios
Experience : 5+ years
Location: Prabhadevi, Mumbai
Interested candidates can apply with their updated profiles.
Regards,
HR Team
Aza Fashions
Job description
The ideal candidate is a self-motivated, multi-tasker, and demonstrated team player. You will be a lead developer responsible for the development of new software security policies and enhancements to security on existing products. You should excel in working with large-scale applications and frameworks and have outstanding communication and leadership skills.
Responsibilities
- Consulting with management on the operational requirements of software solutions.
- Contributing expertise on information system options, risk, and operational impact.
- Mentoring junior software developers in gaining experience and assuming DevOps responsibilities.
- Managing the installation and configuration of solutions.
- Collaborating with developers on software requirements, as well as interpreting test stage data.
- Developing interface simulators and designing automated module deployments.
- Completing code and script updates, as well as resolving product implementation errors.
- Overseeing routine maintenance procedures and performing diagnostic tests.
- Documenting processes and monitoring performance metrics.
- Conforming to best practices in network administration and cybersecurity.
Qualifications
- Minimum of 2 years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS services.
- Experience building multi-region, highly available, auto-scaling infrastructure that optimises performance and cost; planning for future infrastructure as well as maintaining and optimising existing infrastructure.
- Conceptualise, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
- Conceptualise, architect and build a containerised infrastructure using Docker, Mesosphere or similar SaaS platforms.
- Conceptualise, architect and build a secured network utilising VPCs with inputs from the security team.
- Work with developers and QA to institute a policy of continuous integration with automated testing. Architect, build, and manage dashboards to provide visibility into delivery and into production application functional and performance status.
- Work with developers to institute systems, policies, and workflows which allow for rollback of deployments. Triage releases of applications to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Assist the developers and on calls for other teams with post mortem, follow up and review of issues affecting production availability.
- Minimum 2 years’ experience in Ansible.
- Must have written playbooks to automate provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have had prior experience automating deployments to production and lower environments.
- Experience with APM tools like New Relic and log management tools.
- Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis and Elasticsearch clusters, and several other AWS resources like EC2, S3, CloudFront, Route53, and SNS.
- Essential functions: system architecture, process design and implementation
- Minimum of 2 years of scripting experience in Ruby/Python (preferable) and Shell
- Web application deployment systems; continuous integration tools (Ansible)
- Establishing and enforcing network security policy (AWS VPC, security groups) and ACLs
- Establishing and enforcing systems monitoring tools and standards
- Establishing and enforcing Risk Assessment policies and standards
- Establishing and enforcing Escalation policies and standards
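A common pattern behind the "rollback of deployments" responsibility above is the atomic symlink swap: a `current` link points at a versioned release directory, and deploying or rolling back is a single rename. Below is a minimal, stdlib-only Python sketch of the idea; the paths, function names, and release layout are illustrative assumptions, not this employer's actual tooling:

```python
import os

def deploy(releases_dir, current_link, new_release):
    """Point current_link at new_release, returning the previous
    target so the deployment can later be rolled back."""
    previous = os.readlink(current_link) if os.path.islink(current_link) else None
    tmp = current_link + ".tmp"
    os.symlink(os.path.join(releases_dir, new_release), tmp)
    os.replace(tmp, current_link)  # rename over the old link: atomic on POSIX
    return previous

def rollback(current_link, previous_target):
    """Restore the link target captured by deploy()."""
    tmp = current_link + ".tmp"
    os.symlink(previous_target, tmp)
    os.replace(tmp, current_link)
```

Because the swap is a single `rename(2)`, traffic never observes a half-deployed state; real pipelines wrap the same primitive with health checks before and after the switch.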
Senior DevOps Engineer (8-12 yrs Exp)
Job Description:
We are looking for an experienced and enthusiastic DevOps Engineer. As our new DevOps
Engineer, you will be in charge of the specification and documentation of the new project
features. In addition, you will be developing new features and writing scripts for automation
using Java/BitBucket/Python/Bash.
Roles and Responsibilities:
• Deploy updates and fixes
• Utilize various open source technologies
• Hands-on experience with automation tools like Docker, Jenkins, Puppet, etc.
• Build independent web based tools, micro-services and solutions
• Write scripts and automation using Java/BitBucket/Python/Bash.
• Configure and manage data sources like MySQL, Mongo, Elastic search, Redis etc
• Understand how various systems work
• Manage code deployments, fixes, updates and related processes.
• Understand how IT operations are managed
• Work with CI and CD tools, and source control such as Git and SVN.
• Experience with project management and workflow tools and methodologies such as Redmine,
Workfront, Scrum/Kanban/SAFe, etc.
• Build tools to reduce occurrences of errors and improve customer experience
• Develop software to integrate with internal back-end systems
• Perform root cause analysis for production errors
• Design procedures for system troubleshooting and maintenance
Requirements:
• More than six years of experience in a DevOps Engineer role (or similar role);
experience in software development and infrastructure development is mandatory.
• Bachelor’s degree or higher in engineering or related field
• Proficiency in deploying and maintaining web applications
• Ability to construct and execute network, server, and application status monitoring
• Knowledge of software automation production systems, including code deployment
• Working knowledge of software development methodologies
• Previous experience with high-performance and high-availability open source web
technologies
• Strong experience with Linux-based infrastructures, Linux/Unix administration, and
AWS.
• Strong communication skills and ability to explain protocol and processes with team
and management.
• Solid team player.
Job Description:
Mandatory Skills:
Should have strong working experience with Cloud technologies like AWS and Azure.
Should have strong working experience with CI/CD tools like Jenkins and Rundeck.
Must have experience with configuration management tools like Ansible.
Must have working knowledge on tools like Terraform.
Must be good at Scripting Languages like shell scripting and python.
Should have expertise in DevOps practices and have demonstrated the ability to apply that knowledge across diverse projects and teams.
Preferable skills:
Experience with tools like Docker, Kubernetes, Puppet, JIRA, GitLab, and JFrog.
Experience in scripting languages like Groovy.
Experience with GCP
Summary & Responsibilities:
Write build pipelines and IaC (ARM templates, Terraform, or CloudFormation).
Develop Ansible playbooks to install and configure various products.
Implement Jenkins and Rundeck jobs (and pipelines).
Must be a self-starter and be able to work well in a fast paced, dynamic environment
Work independently and resolve issues with minimal supervision.
Strong desire to learn new technologies and techniques
Strong communication (written / verbal ) skills
Qualification:
Bachelor's degree in Computer Science or equivalent.
4+ years of experience in DevOps and AWS.
2+ years of experience in Python, Shell scripting and Azure.
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum which breaks the conventional market-research turnaround time.
SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights with quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This makes us challenge conventional software development design principles. The team likes to grind and helps each other to lift in tough situations.
Day to day responsibilities include:
- Work on code deployment via Bitbucket, AWS CodeDeploy, and manual processes
- Work on Linux/Unix OS and multi-tech application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Easing developers’ lives so that they can focus on business logic rather than deploying and maintaining it
- Managing sprint releases
- Educating the team on best practices
- Finding ways to avoid human error and save time by automating the processes using Terraform, CloudFormation, Bitbucket pipelines, CodeDeploy, scripting
- Implementing cost-effective measures on cloud and minimizing existing costs
Skills and prerequisites
- OOPS knowledge
- Problem solving nature
- Willing to do the R&D
- Works with the team and support their queries patiently
- Bringing new things to the table - staying updated
- Prioritizing solutions over problems
- Willing to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform – creating and managing EC2, Lambda, IAM, S3, etc.
- Basic Linux handling
- Docker and orchestration (Great to have)
- Scripting – python (preferably)/bash
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will work with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLA and business objectives through whole lifecycle design of services through inception to implementation.
- Ensuring availability, performance, security, and scalability of AWS production systems
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
- 24x7 on-call support in shifts for Level 2 and higher escalations
- Respond to incidents and write blameless RCA’s/postmortems
- Implement and practice proper security controls and processes
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the platform.
Must have:
- Minimum 3 Years of Experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong interpersonal skills.
- Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
- Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
- Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
- Experience in creating highly automated infrastructures with configuration management tools like Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and administration.
- Production experience with a major cloud provider such as Amazon AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcached, or one of the many in-memory data stores.
- Experience with various load balancing technologies such as Amazon ALB/ELB, HAProxy, F5.
- Comfortable with large-scale, highly available distributed systems.
Good to have:
- Understanding of web standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with HashiCorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as the CAP theorem, concurrency control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs
- Experience in Kafka, RabbitMQ, or any messaging bus