We are looking for a DevOps Engineer who can deliver high-value features quickly through cross-team collaboration, bring a collaborative approach to software development, testing, and deployment, and bring teams with varying objectives together to work toward more efficient, higher-quality code releases.
Job Location: Sultanpur, New Delhi, 110030.
Responsibilities:
- Set up and maintain DevOps tools, cloud monitoring tools, and cloud security.
- The DevOps engineer needs to be agile enough to wear a technical hat and manage operations simultaneously.
- Monitor and maintain highly available systems on Kubernetes (multiple production applications).
- Implement and manage CI/CD pipelines.
- Implement an auto-scaling system for our Kubernetes nodes.
- Monitor and maintain highly available databases (Redis, MongoDB, Postgres, and Cassandra).
- Monitor cost fluctuations and optimize.
- Support developers by triaging and assigning bugs and alerting them to failures promptly.
- Analyse architecture problems and caveats and provide precise solutions/tools for them.
- Respond to system alerts during local business hours.
- Manage system security and administrator credentials.
- Deploy and manage AWS services (EC2, S3, VPC, Route53, Auto Scaling, etc.) and handle configuration management of the full service stack.
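The responsibility above about implementing auto-scaling for Kubernetes nodes can be illustrated with a short sketch. The following is a minimal, purely illustrative Python version of the threshold-based scaling decision such a system typically encodes; all function names, thresholds, and bounds here are hypothetical, not taken from any specific autoscaler.

```python
# Illustrative sketch of a threshold-based node-scaling decision.
# All names and numbers are hypothetical.

def desired_node_count(current_nodes: int,
                       avg_cpu_utilization: float,
                       scale_up_at: float = 0.80,
                       scale_down_at: float = 0.30,
                       min_nodes: int = 2,
                       max_nodes: int = 20) -> int:
    """Return the node count a simple autoscaler would target."""
    if avg_cpu_utilization > scale_up_at:
        target = current_nodes + 1          # add capacity under load
    elif avg_cpu_utilization < scale_down_at:
        target = current_nodes - 1          # shed idle capacity
    else:
        target = current_nodes              # inside the comfort band
    return max(min_nodes, min(max_nodes, target))  # clamp to bounds

print(desired_node_count(5, 0.90))  # high load -> 6
print(desired_node_count(5, 0.10))  # idle -> 4
print(desired_node_count(2, 0.10))  # already at the floor -> 2
```

In production this logic usually lives in the Kubernetes Cluster Autoscaler or a cloud provider's managed node-pool autoscaling rather than hand-rolled code; the sketch only shows the shape of the decision.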
The DevOps Engineer must have the following skills:
- Core skill set: DevSecOps; cloud-native deployments; deployments using Docker/Kubernetes with supporting non-functional components (e.g. API gateways, SSO/IAM, logging/monitoring, load balancers, firewalls); deployments in on-premise environments using modern approaches that are cloud-portable.
- Minimum 2+ years of relevant experience primarily in DevOps and cloud computing.
- Prior experience working with AWS (EKS, Lambda) or other cloud platforms such as GCP and Azure.
- Intermediate knowledge of containers, Docker, and orchestration.
- Hands-on experience with Kubernetes.
- Experience with CI/CD platforms like GitHub Actions, Jenkins, Travis CI, etc.
- Experienced in logging and monitoring of cloud resources with EFK, Prometheus, and Grafana.
- Good command of OS fundamentals (Linux) and networking.
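Several of the skills above center on logging and monitoring (EFK, Prometheus, Grafana). As a plain-Python illustration of the kind of alert rule those stacks evaluate, here is a hedged sketch of an error-rate check over a window of log lines; in practice this would be a Prometheus/Grafana alert expression rather than application code, and the threshold shown is hypothetical.

```python
# Hypothetical sketch of an error-rate alert rule, the kind normally
# expressed in Prometheus/Grafana rather than hand-written code.

def should_alert(log_lines, threshold=0.05):
    """Alert when the fraction of ERROR lines exceeds `threshold`."""
    if not log_lines:
        return False
    errors = sum(1 for line in log_lines if " ERROR " in line)
    return errors / len(log_lines) > threshold

window = [
    "2024-01-01T00:00:00Z INFO  request served",
    "2024-01-01T00:00:01Z ERROR upstream timeout",
    "2024-01-01T00:00:02Z INFO  request served",
    "2024-01-01T00:00:03Z INFO  request served",
]
print(should_alert(window))  # 1/4 = 25% error rate -> True
```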
Learn about our Culture:
Wigzo is a culture-driven company powered by its employees, their vision, and their inspiration. All the employees live by the culture and values that define us. We value people for their talent, personality, competency, and ability to learn and grow.
We create a work environment that allows people to thrive and show their best performance. We believe in meritocracy. We take pride in our diversity and strive to embrace diverse voices and create an inclusive workplace.
To know more please visit: https://www.wigzo.com/employee-spotlight/
About Wigzo Technologies by Shiprocket:
Wigzo is an e-commerce marketing automation platform in which Shiprocket has acquired a majority stake. Together, we help businesses of all sizes delve deeper into data to unleash possibilities to enhance sales and income. Wigzo enables e-commerce firms to personalize each customer interaction, resulting in increased engagement, retention, loyalty, and lifetime value.
An Omnichannel marketing automation suite, Wigzo enables you to understand your customers/visitors more intelligently so you market to them what they want and not what you have. It works on real-time customer insights with real-time communication and a personalization engine that helps marketers manage basic communication and also provides dynamic email, personalized notifications, user retention, real-time engagement, real-time content, and much more.
Wigzo serves 1,000+ customers globally and has been in the industry for 6+ years, with 100+ e-commerce brands working with us, resulting in 15x business growth. For more information, please visit our website: https://www.wigzo.com/
DevOps Engineer (Azure DevOps)
BANGALORE, INDIA / ENGINEERING – DEVOPS / FULL-TIME
Responsibilities:
- Implement tooling and process to manage the migration of all systems changes (code & configuration) through various environments (Dev, Test, UAT, PROD, etc.) of the Skypoint platform.
- Infrastructure as Code experience: ARM (Azure Resource Manager), Terraform, or Pulumi.
- Azure DevOps experience: practical experience building build/release pipelines, CI/CD, and automated integration test suites in CI pipelines.
- Git experience: practical experience with Git workflows (commits, branches, pull requests).
- Networking knowledge: familiarity with Azure VNets, Private Link, and Front Door.
- Strong affinity with software development (we are not looking for a software developer, but someone who understands the domain of software development).
- Troubleshoot, reproduce and solve challenging operational issues in a complex cloud environment, involving our load balancing platforms, and interacting with multiple microservices across our infrastructure.
- Implement tooling and process to manage regular data backups, logs processing, and access control.
- Manage ongoing maintenance activities such as certificate renewals, outage communications, and sandbox environment refreshes.
- Develop tools and procedures to support security and access control automation (provisioning & controls) in Microsoft Azure environments.
- Implement tooling and process to automate infrastructure setup and management across all our platforms.
- A bachelor’s or master’s degree in computer science or software systems with 5+ years of relevant experience.
- Overall 5+ years of experience, with 3+ years of relevant experience as a DevOps Engineer.
- Minimum 2+ years of experience with Azure DevOps, Azure Pipelines, APIM, and ADF/Azure Databricks.
- Industry certifications such as CISSP or Microsoft AZ-400 (Microsoft Azure DevOps).
- A passion for automation of all aspects of software development, DevOps tools, and maintenance.
- Extensive experience in a DevOps team, supporting CI/CD workloads, configuration management software with tools like Ansible, Puppet, Chef, Jenkins, Docker, Azure Kubernetes, etc.
- At least 3-5 years of working experience in designing and implementing automated solutions to enable the management, and administration of Microsoft cloud infrastructure (Azure) with expert knowledge of Microsoft Azure technologies.
- Experience supporting the following technology stack and services (Virtual Machines, Kubernetes Services (AKS), Container Instances, Terraform, Ansible, Docker, HAProxy, Nginx, ELB/ALB, ELK, Grafana, ECS/EKS/Kubernetes, Fluentd, Elasticsearch) is a plus.
- Programming/scripting experience with Python, C#, shell scripting or Bash is a must.
- Experience with some aspect(s) of computer security: network security, application security, security protocols, cryptography, etc is a big plus.
- Experience with automation of log pooling, rotation, scrubbing & analysis is a plus.
- Experience with service monitoring tools such as statsd, collectd, the ELK stack, or similar third-party monitoring services is a plus.
- Strong verbal and written communication skills with the ability to work effectively on shared projects with program managers, developers, and testers.
Benefits:
- Professional development and training opportunities.
- Company happy hours and fun team building activities.
- Flexible work hours, plus the benefit of having your workstation at home.
- Add-On Internet reimbursement within the company's permissible limits.
- Opportunity to work with a US-based SaaS start-up on new tech stacks.
- Meal cards and gift hampers.
- Competitive total compensation package (salary + bonus + equity).
JOB SUMMARY
Good hands-on experience with DevOps, AWS administration, Terraform, and Infrastructure as Code
Knowledge of EC2, Lambda, S3, ELB, VPC, IAM, CloudWatch, CentOS, and server hardening
Ability to understand business requirements and translate them into technical requirements
A knack for benchmarking and optimization
We are looking for an experienced DevOps professional. Be part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes that empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding of accessibility and security compliance (depending on the specific project)
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Understanding of the fundamental design principles behind a scalable application
- Configuration management tools (Ansible/Chef/Puppet); cloud service providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem experience is a plus
- Should be able to make key decisions for our infrastructure, networking, and security
- Manipulation of shell scripts during migration and DB connection
- Monitor production server health across parameters (CPU load, physical memory, swap memory) and set up monitoring tools such as Nagios to track production server health
- Create alerts and configure monitoring of specified metrics to manage cloud infrastructure efficiently
- Set up and manage VPCs and subnets; connect different zones; block suspicious IPs/subnets via ACLs
- Create and manage AMIs, snapshots, and volumes; upgrade/downgrade AWS resources (CPU, memory, EBS)
- Responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams
- Strong knowledge of configuration management tools such as Ansible, Chef, and Puppet
- Extensive experience with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports
- Experienced in troubleshooting, backup, and recovery
- Excellent knowledge of cloud service providers such as AWS and DigitalOcean
- Good knowledge of the Docker and Kubernetes ecosystem
- Proficient understanding of code versioning tools, such as Git
- Must have experience working in an automated environment
- Good knowledge of AWS services such as Amazon EC2, Amazon S3 (including Amazon Glacier), Amazon VPC, and Amazon CloudWatch
- Scheduling jobs using crontab; creating swap memory
- Proficient knowledge of access management (IAM)
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
- Candidates should have good knowledge of GCP.
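The bullet above about scheduling jobs with crontab can be made concrete. Below is a hedged, stdlib-only sketch of matching a single crontab field against a time value; it supports only `*`, plain numbers, and `*/step`, while real cron syntax (ranges, lists, day names) is considerably richer.

```python
# Hedged sketch: match one crontab field (e.g. the minute field)
# against a value. Supports "*", plain numbers, and "*/step" only.

def cron_field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True                          # wildcard matches anything
    if field.startswith("*/"):
        return value % int(field[2:]) == 0   # step values: */15 -> 0,15,30,45
    return int(field) == value               # exact numeric match

# A crontab entry like "*/15 2 * * *" runs at minutes 0/15/30/45 of hour 2:
print(cron_field_matches("*/15", 30))  # True
print(cron_field_matches("*/15", 20))  # False
print(cron_field_matches("2", 2))      # True
```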
Educational Qualifications
B.Tech IT / M.Tech / MBA IT / BCA / MCA or any degree in a relevant field
Experience: 2-6 years
Technical Lead - DevOps
Srijan Technologies is hiring for the DevOps Lead position on the Cloud Team, with a permanent work-from-home option.
Immediate Joiners or candidates with 30 days notice period are preferred.
Requirements:
- Minimum 4-6 years of experience in DevOps release engineering.
- Expert-level knowledge of Git.
- Must have a great command of Kubernetes.
- Certified Kubernetes Administrator
- Expert-level knowledge of shell scripting and Jenkins to maintain continuous integration/deployment infrastructure.
- Expert-level knowledge of Docker.
- Expert-level knowledge of the configuration management and provisioning toolchain; at least one of Ansible, Chef, or Puppet.
- Basic level of web development experience and setup: Apache, Nginx, MySQL
- Basic level of familiarity with Agile/Scrum process and JIRA.
- Expert level of Knowledge in AWS Cloud Services.
Experience and Education
• Bachelor’s degree in engineering or equivalent.
Work experience
• 4+ years of infrastructure and operations management experience at a global scale.
• 4+ years of experience in operations management, including monitoring, configuration management, automation, backup, and recovery.
• Broad experience in the data center, networking, storage, server, Linux, and cloud technologies.
• Broad knowledge of release engineering: build, integration, deployment, and provisioning, including familiarity with different upgrade models.
• Demonstrable experience with executing, or being involved in, a complete end-to-end project lifecycle.
Skills
• Excellent communication and teamwork skills – both oral and written.
• Skilled at collaborating effectively with both Operations and Engineering teams.
• Process and documentation oriented.
• Attention to detail. Excellent problem-solving skills.
• Ability to simplify complex situations and lead calmly through periods of crisis.
• Experience implementing and optimizing operational processes.
• Ability to lead small teams: provide technical direction, prioritize tasks to achieve goals, identify dependencies, report on progress.
Technical Skills
• Strong fluency in Linux environments is a must.
• Good SQL skills.
• Demonstrable scripting/programming skills (Bash, Python, Ruby, or Go) and the ability to develop custom tool integrations between multiple systems using their published APIs/CLIs.
• Layer 3 networking, load balancer, routing, and VPN configuration.
• Kubernetes configuration and management.
• Expertise using version control systems such as Git.
• Configuration and maintenance of database technologies such as Cassandra, MariaDB, and Elasticsearch.
• Designing and configuration of open-source monitoring systems such as Nagios, Grafana, or Prometheus.
• Designing and configuring log pipeline technologies such as ELK (Elasticsearch, Logstash, Kibana), Fluentd, Grok, rsyslog, and Google Stackdriver.
• Using and writing modules for Infrastructure as Code tools such as Ansible, Terraform, Helm, and Kustomize.
• Strong understanding of virtualization and containerization technologies such as VMware, Docker, and Kubernetes.
• Specific experience with Google Cloud Platform or Amazon EC2 deployments and virtual machines.
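The skills above call out log pipelines and Grok-style parsing (ELK, Fluentd). Here is a hedged, stdlib-only sketch of the core job a Logstash/Fluentd filter performs: turning an access-log line into structured fields. The log format shown is the common combined/Nginx default, and the field names are illustrative.

```python
import re

# Hedged sketch of Grok-style parsing: extract structured fields from a
# combined-log-format line, as a Logstash/Fluentd filter would.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_access_log(line: str):
    """Return a dict of named fields, or None if the line doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /health HTTP/1.1" 200 512'
fields = parse_access_log(line)
print(fields["status"], fields["path"])  # 200 /health
```

In a real pipeline the equivalent pattern would be written as a Grok expression (e.g. the stock `%{COMBINEDAPACHELOG}` pattern) rather than a raw regex.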
Certa (getcerta.com) is a Silicon Valley-based tech product start-up that is automating the vendor, supplier, and other stakeholder onboarding processes (think background checks, agreements, and the works) for companies across industries and geographies. With multiple Fortune 500 and Fortune 1000 clients, on Certa's tech team you will work on products that are changing the way large companies do business.
The DevOps engineers will work within an agile team of Engineers and Operations personnel building highly resilient, scalable and performant AWS infrastructure in an automated and efficient manner. The DevOps engineers will work alongside the Application DevOps teams and cross-functional IT teams. The engineers will be required to use their initiative to innovate to achieve maximum performance and be prepared to investigate and use new products/services offered by AWS.
Key Accountabilities
- Build and manage the AWS foundation platform to enable application deployments
- Monitor Infra Performance, build monitors and alerts.
- Engineer solutions on AWS foundation platform using Infrastructure As Code methods (e.g. Terraform)
- Integrate, configure, deploy and manage centrally provided common cloud services (e.g. IAM, networking, logging, Operating systems, Containers)
- Ensure compliance with centrally defined Security Standards
- Ensure compliance with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency)
- Build and support continuous integration (CI), continuous delivery (CD) and continuous testing activities
- Engineering activities to implement patches provided centrally
- Update support and operational documentation as required
Qualifications And Experience
• 1+ years of experience working in DevOps.
• Experience of building a range of Services in AWS including EC2, S3, VPC, Lambda, RDS, Fargate/ECS, Aurora Serverless
• Expert understanding of DevOps principles and Infrastructure as Code concepts and techniques
• Strong understanding of CI/CD and available tools
• Security and Compliance, e.g. IAM and cloud compliance/auditing/monitoring tools
• Customer/stakeholder focus. Ability to build strong relationships with Application teams, cross functional IT and global/local IT teams
• Good leadership and teamwork skills - Works collaboratively in an agile environment with DevOps application ‘pods’ to provide AWS specific capability/skills required to deliver the service.
• Operational effectiveness - delivers solutions that align to approved design patterns and security standards
• Risk management effectiveness
• Excellent skills in at least one of the following: Python, Go, etc.
• Experienced in full automation and configuration management
• A successful track record of delivering complex projects and/or programs, utilizing appropriate techniques and tools to ensure and measure success
• A comprehensive understanding of risk management and proven experience of ensuring own/others’ compliance with relevant regulatory processes
- Manage systems on AWS infrastructure including application servers, database servers
- Proficiency with EC2, Redshift, RDS, Elasticsearch, MongoDB and other AWS services.
- Proficiency with managing a distributed service architecture with multiple microservices - including maintaining dev, QA, staging and production environments, managing zero-downtime releases, ensuring failure rollbacks with zero-downtime and scaling on-demand
- Containerization of workloads and rapid deployment
- Driving cost optimization while balancing performance
- Manage high availability of existing systems and proactively address system maintenance issues
- Manage AWS infrastructure configurations along with Application Load Balancers, HTTPS configurations, and network configurations (VPCs)
- Work with the software engineering team to automate code deployment
- Build and maintain tools for deployment, monitoring, and operations, and troubleshoot and resolve issues in our dev, test, and production environments.
- Familiarity with managing Spark based ETL pipelines a plus
- Experience in managing a team of DevOps Engineers
- Bachelor's or Master's degree in a quantitative field
- Cloud computing experience, Amazon Web Services (AWS). Bonus if you've worked on Azure, GCP and on cost optimization.
- Prior experience in working on a distributed microservices architecture and containerization.
- Strong background in Linux/Windows administration and scripting
- Experience with CI/CD pipelines, Git, deployment configuration and monitoring tools
- Working understanding of various components for Web Architecture
- A working understanding of code and scripts (JavaScript, Angular, Python)
- Excellent communication skills, problem-solving and troubleshooting.
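The responsibilities above mention zero-downtime releases with failure rollback. As a hedged illustration, here is the health-gate decision at the core of a blue-green or canary rollout: route production traffic to the new version only if its observed error rate is acceptable, otherwise keep the known-good version. Every name and threshold here is hypothetical.

```python
# Illustrative blue-green promotion gate: promote the candidate
# deployment only if its error rate stays under a hypothetical budget,
# otherwise roll back to the stable version.

def choose_live_version(stable: str, candidate: str,
                        candidate_error_rate: float,
                        max_error_rate: float = 0.01) -> str:
    """Return which deployment should receive production traffic."""
    if candidate_error_rate <= max_error_rate:
        return candidate   # promote: candidate looks healthy
    return stable          # roll back: keep the known-good version

print(choose_live_version("v1.4.2", "v1.5.0", 0.002))  # v1.5.0 (promote)
print(choose_live_version("v1.4.2", "v1.5.0", 0.15))   # v1.4.2 (rollback)
```

Real rollout controllers (e.g. load-balancer weight shifting or Kubernetes deployment strategies) add traffic ramping and observation windows on top of this basic decision.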
1. Developing a video player website where students can learn various courses, view e-books, solve tests, etc.
2. Building the product to reach higher scalability
3. Developing software to integrate with internal back-end systems
4. Working on AWS cloud platform
5. Working on Amazon Ec2, Amazon S3 bucket, and Git
6. Working on the implementation of continuous integration and deployment pipelines using Jenkins (mandatory)
7. Monitoring, troubleshooting, and diagnosing infrastructure systems (excellent knowledge required for the same)
8. Building tools to reduce the occurrences of errors and improve customer experience
9. Should also have experience with the MERN stack.
DevOps Solution Architect
Below are the job details:
Role: DevOps Architect
Experience Level: 8-12 Years
Job Location: Hyderabad
Key Responsibilities:
Look through the various DevOps Tools/Technologies and identify the strengths and provide direction to the DevOps automation team
Out-of-box thought process on the DevOps Automation Platform implementation
Explore various tools and technologies and run POCs on integrating these tools
Evaluate Backend API's for various DevOps tools
Perform code reviews, keeping RASUI in context
Mentor the team on the various E2E integrations
Act as a liaison in evangelizing the currently implemented automation solution
Bring in various DevOps best Practices/Principles and participate in adoption with various app teams
Must have:
Should possess a Bachelor's/Master's in computer science with a minimum of 8 years of experience
Should possess minimum 3 years of strong experience in DevOps
Should possess expertise in using various DevOps tools, libraries, and APIs (Jenkins, JIRA, AWX, Nexus, GitHub, BitBucket, SonarQube)
Should possess expertise in optimizing the DevOps stack (containers/Kubernetes/monitoring)
2+ years of experience creating solutions and translating them to the development team
Should have a strong understanding of OOP and the SDLC (Agile/SAFe standards)
Proficient in Python, with good knowledge of its ecosystem (IDEs and frameworks)
Proficient in various cloud platforms (Azure/AWS/Google cloud platform)
Proficient in various DevOps offerings (Pivotal/OpenStack/Azure DevOps)
Regards,
Talent acquisition team
Tetrasoft India
Stay home and stay safe.