Position Summary
DevOps is a department within Horizontal Digital, comprising three practices:
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is a Cloud Engineering role for candidates who also have experience with infrastructure migrations. It is a fully hands-on position focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead. In addition, you will work on projects to build out Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of our infrastructure work is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-premises or on IaaS, PaaS, and containers.
As a result, most of our DevOps work currently involves planning, architecting, and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of the technical, commercial, and service elements related to cloud migrations and infrastructure deployments.
- The selected candidate will ensure high customer satisfaction while delivering infrastructure and migration projects.
- Expect to work across multiple projects in parallel and to maintain a fully flexible approach to working hours.
- Stay up to date with the rapid technological advancements and developments taking place in the industry.
- Have working knowledge of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.
Requirements:
- Bachelor's degree in Computer Science or an equivalent qualification.
- 6 to 8 years of total work experience.
- 4 to 6 years of migration experience.
- Multi-cloud background (Azure/AWS/GCP).
- Implementation knowledge of VMs and VNets.
- Know-how of cloud readiness and assessment.
- Good understanding of the 6 Rs of migration.
- Detailed understanding of cloud offerings.
- Ability to assess and perform discovery independently for any cloud migration (a minimal discovery sketch follows this list).
- Working experience with containers and Kubernetes.
- Good knowledge of Azure Site Recovery, Azure Migrate, and CloudEndure.
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN, ExpressRoute, peering, Network Security Groups, route tables, NAT Gateway, etc.
- Experience with CI/CD tools such as Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
- High availability and disaster recovery implementations, taking RTO and RPO requirements into consideration.
- Candidates with AWS/Azure/GCP certifications will be preferred.
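For the discovery item above, a minimal sketch of what an automated inventory step might look like is shown below, assuming AWS credentials are already configured and that boto3 is the tool of choice (the function name and region are illustrative, not part of the posting):

```python
import boto3

def inventory_ec2(region="us-east-1"):
    """Collect a simple EC2 inventory as a starting point for migration discovery."""
    ec2 = boto3.client("ec2", region_name=region)
    inventory = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append({
                    "InstanceId": instance["InstanceId"],
                    "InstanceType": instance["InstanceType"],
                    "State": instance["State"]["Name"],
                })
    return inventory

if __name__ == "__main__":
    for item in inventory_ec2():
        print(item)
```

A real assessment would also capture storage, networking, and dependency data, but the same pattern of scripted, repeatable discovery applies.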
Similar jobs
LogiNext is looking for a technically savvy and passionate Junior DevOps Engineer to support the development and operations efforts on our product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.
You have knowledge of building secure, high-performing, and scalable infrastructure, experience automating and streamlining development operations and processes, and strong troubleshooting skills for resolving issues in non-production and production environments.
Responsibilities:
- Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure (a minimal alerting sketch follows this list)
- Explore new tools to improve development operations
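To illustrate the monitoring and alerting item above, here is a minimal sketch using boto3 to create a CloudWatch CPU alarm; the alarm name, instance ID, and SNS topic are placeholders, and this is not a setup prescribed by the posting:

```python
import boto3

def create_cpu_alarm(instance_id, sns_topic_arn, region="us-east-1"):
    """Create a CloudWatch alarm that fires when average CPU stays above 80% for 10 minutes."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",   # placeholder naming convention
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                            # 5-minute datapoints
        EvaluationPeriods=2,                   # two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],          # notify an SNS topic
    )
```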
Requirements:
- Bachelor's degree in Computer Science, Information Technology or a related field
- 0 to 1 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
Job Title: DevOps SDE III
Job Summary
Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.
Responsibilities and Duties
In this role, you will
- Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
- Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
- Build out and improve our observability infrastructure.
- Set up automated auditing processes and improve our applications' security posture (a minimal auditing sketch follows this list).
- Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
- Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
- Continuously improve our build pipelines, automated deployments, and automated testing.
- Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
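As one hedged example of the automated auditing mentioned above (the posting does not prescribe this check; boto3 and AWS credentials are assumed), the following script flags IAM users that have no MFA device attached:

```python
import boto3

def users_without_mfa():
    """Return IAM user names with no MFA device attached (a simple security-posture check)."""
    iam = boto3.client("iam")
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                flagged.append(user["UserName"])
    return flagged

if __name__ == "__main__":
    print("Users without MFA:", users_without_mfa())
```

A scheduled job running checks like this one is a common way to track security posture over time.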
Qualifications and Skills
Hard requirements for this role:
- 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
- Experience with git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS and Kubernetes).
- Experience in working with infrastructure as code (Terraform/CloudFormation).
- Linux and networking administration experience.
- Strong Linux Shell scripting experience.
- Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred)
- Experience with configuration management tools like Ansible and Packer.
- Experience with container orchestration tools. (Kubernetes/ECS).
- Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
- AWS SysOps Administrator and Developer certifications, or equivalent knowledge.
Good to have:
- Experience working with ELK stack.
- Experience supporting JVM applications.
- Experience working with APM tools. (We use Datadog.)
- Experience working in an everything-as-code (XaaC) environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
- Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
- Experience with JIRA/Jira help desk.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling that improve the uptime, reliability, and maintainability of our backend infrastructure, and that help us meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang (a minimal automation sketch follows this list)
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
- Strong communication skills (written and verbal)
- Responsive, reliable and results oriented with the ability to execute on aggressive plans
- A background in software development, with experience of working in an agile product software development environment
- An understanding of modern deployment tools (Git, Bitbucket, Jenkins, etc.), workflow tools (Jira, Confluence) and practices (Agile (SCRUM), DevOps, etc.)
- Expert-level experience with AWS tools, technologies and their associated APIs: IAM, CloudFormation, CloudWatch, AMIs, SNS, EC2, EBS, EFS, S3, RDS, VPC, ELB, Route 53, Security Groups, Lambda, etc.
- Hands on experience with Kubernetes (EKS preferred)
- Strong DevOps skills across CI/CD and configuration management using Jenkins, Ansible, Terraform, Docker.
- Experience provisioning and spinning up AWS Clusters using Terraform, Helm, Helm Charts
- Ability to work across multiple projects simultaneously
- Ability to manage and work with teams and customers across the globe
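To make the automation-scripting requirement above concrete, here is a minimal uptime-check sketch in Python; the endpoints and the exit-code convention are hypothetical, not details from the posting:

```python
import sys
import urllib.request

# Hypothetical endpoints; a real list would come from service discovery or configuration.
ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/api/status",
]

def is_healthy(url, timeout=5):
    """Return True if the endpoint answers with an HTTP 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

if __name__ == "__main__":
    failures = [url for url in ENDPOINTS if not is_healthy(url)]
    if failures:
        print("Unhealthy endpoints:", failures)
        sys.exit(1)   # non-zero exit so a cron job or CI step can raise an alert
    print("All endpoints healthy")
```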
A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading and investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you will be in charge of the DevOps team, and your remit will include the following:
- Private Cloud - Design and maintain a high-performance, reliable network architecture to support HPC applications.
- Scheduling Tool - Implement and maintain an HPC scheduling technology such as Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing and scheduling analytical jobs. Implement controls that allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices and data isolation policies between different internal divisions.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage Solutions - Optimize storage solutions such as NetApp, EMC and Quobyte for analytical jobs. Monitor their performance daily to identify issues early.
- NFS - Implement and optimize the latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google Cloud utilization in the firm to increase efficiency, improve collaboration and reduce cost. Maintain the environment for our existing use cases and further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backups of all crucial data, binaries, code, etc. in a secure manner, at intervals warranted by the use case. Ensure that recovery from backup is tested and seamless (a minimal backup sketch follows this list).
- Access Control - Maintain passwordless access control and improve security over time. Minimize automated-job failures caused by unsuccessful logins.
- Operating System - Plan, test and roll out new operating systems for all production, simulation and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development teams to freeze configurations/playbooks for various teams and internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for this purpose.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put a monitoring mechanism in place for early detection of any suspicious activity.
- Maintain all third-party tools used for development and collaboration - this includes maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo.
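As a hedged illustration of the backup item above (the directory, bucket name, and use of S3 are placeholders rather than details of the firm's setup), a minimal Python sketch that archives a directory and uploads it might look like this:

```python
import datetime
import tarfile

import boto3

def backup_directory(source_dir, bucket, prefix="backups"):
    """Create a timestamped tar.gz of source_dir and upload it to an S3 bucket."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    archive = f"/tmp/backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=stamp)        # store paths under the timestamp
    key = f"{prefix}/{stamp}.tar.gz"
    boto3.client("s3").upload_file(archive, bucket, key)
    return key

if __name__ == "__main__":
    # Placeholder arguments; restores should be rehearsed separately, as the posting stresses.
    print(backup_directory("/srv/critical-data", "example-backup-bucket"))
```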
Qualifications
- Bachelor's or Master's degree, preferably in CSE/IT
- 10+ years of relevant experience in a sys-admin function
- Strong knowledge of IT infrastructure, Linux, networking and grid computing
- Strong grasp of automation and data management tools
- Proficiency in scripting languages, including Python
Desirables
- Professional attitude; a cooperative and mature approach to work; focused, structured and well-considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
Below is the job description for the position of DevOps Azure Engineer at Xceedance.
Qualifications: BE/B.Tech/MCA in Computer Science
Key Requirements for the Position
• Develop Azure application designs and connectivity patterns, Azure networking topologies, and Azure storage facilities.
• Run code conformance tools as part of releases.
• Design Azure App Service web apps by using Azure CLI, PowerShell, and other tools (a minimal CLI sketch follows this list)
• Implement containerized solutions using Docker and Azure Kubernetes Service
• Automate the build and deployment process from development to production using Azure DevOps tools and practices
• Design and implement CI/CD pipelines
• Script and update builds and deployments.
• Coordinate environment usage and alignment.
• Develop, maintain, and optimize automated deployments code for development, test, staging and production environments.
• Configure the application and container platform with proactive monitoring tools and trigger alerts through communication channels
• Develop infrastructure and platform code
• Effectively contribute to building the overall knowledge and expertise of the technical team
• Provide Level 2/3 technical support
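As a sketch of the Azure CLI item above (the resource group, plan, and app names are placeholders, the commands assume an already-authenticated az session, and this is not a workflow prescribed by the posting):

```python
import subprocess

# Placeholder names; replace with real resource group, plan, and app names.
RESOURCE_GROUP = "example-rg"
PLAN = "example-plan"
APP = "example-webapp"

def run(cmd):
    """Run an Azure CLI command and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["az", "group", "create", "--name", RESOURCE_GROUP, "--location", "eastus"])
    run(["az", "appservice", "plan", "create", "--name", PLAN,
         "--resource-group", RESOURCE_GROUP, "--sku", "B1"])
    run(["az", "webapp", "create", "--name", APP,
         "--resource-group", RESOURCE_GROUP, "--plan", PLAN])
```

In practice these steps would usually live in an Azure DevOps or CI pipeline rather than an ad-hoc script.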
Location: Noida or Gurgaon
Projects you'll be working on:
- We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
- Writing scripts for the procurement, configuration and deployment of instances (infrastructure automation) on GCP (a minimal sketch follows this list)
- Managing Kubernetes clusters
- Managing products and services such as VPC, Elasticsearch, Cloud Functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc.
- Supporting developers in setting up infrastructure for services
- Managing and improving the microservices infrastructure
- Managing high-availability, low-latency applications
- Focusing on security best practices and assisting with security and compliance activities
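As a minimal illustration of scripted instance management on GCP (the project and zone are placeholders, and using the google-cloud-compute client library is an assumption, not a requirement from the posting):

```python
from google.cloud import compute_v1

def list_instances(project="example-project", zone="us-central1-a"):
    """Print the name, machine type, and status of each Compute Engine instance in a zone."""
    client = compute_v1.InstancesClient()
    for instance in client.list(project=project, zone=zone):
        print(instance.name, instance.machine_type, instance.status)

if __name__ == "__main__":
    list_instances()
```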
Requirements
- Minimum 3 years of experience in a DevOps role
- Minimum 1 year of experience with Kubernetes clusters (infrastructure as code, maintenance and scalability)
- Bash expertise and professional programming experience in Node or Python
- Experience setting up, configuring and using Jenkins or other CI tools, and building CI/CD pipelines
- Experience setting up microservices architectures
- Experience with package management and deployments
- Thorough understanding of networking.
- Understanding of all common services and protocols
- Experience in web server configuration, monitoring, network design and high availability
- Thorough understanding of DNS, VPN, SSL
Technologies you'll work with:
- GKE, Prometheus, Grafana, Stackdriver
- ArgoCD and GitHub Actions
- NodeJS Backend
- Postgres, ElasticSearch, Redis, RabbitMQ
- Whatever else you decide - we're constantly re-evaluating our stack and tools
- Having prior experience with the technologies is a plus, but not mandatory for skilled candidates.
Benefits
- Remote Option - You can work from location of your choice :)
- Reimbursement of Home Office Setup
- Competitive Salary
- Friendly atmosphere
- Flexible paid vacation policy
Searce is a niche Cloud Consulting business with a futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe?
1. Best practices are overrated
○ Implementing best practices can only make one 'average'.
2. Honesty and Transparency
○ We believe in the naked truth. We do what we tell and tell what we do.
3. Client Partnership
○ Client-vendor relationship: No. We partner with clients instead.
○ And our sales team comprises 100% of our clients.
How we work?
It's all about being Happier first, and the rest follows. Searce work culture is defined by HAPPIER.
1. Humble: Happy people don't carry ego around. We listen to understand, not to respond.
2. Adaptable: We are comfortable with uncertainty. And we accept changes well, as that's what life is about.
3. Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges. We don't have time or adequate space for it.
4. Passionate: We are as passionate about the great street-food vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
5. Innovative: Innovate or die. We love to challenge the status quo.
6. Experimental: We encourage curiosity and making mistakes.
7. Responsible: Driven. Self-motivated. Self-governing teams. We own it.
Are you the one? Quick self-discovery test:
1. Love for cloud: When was the last time your dinner entailed an act on "How would 'Jerry Seinfeld' pitch Cloud platform & products to this prospect" while your friend did the 'Sheldon' version of the same thing?
2. Passion for sales: When was the last time you stopped at a remote gas station while on vacation and ended up helping the gas station owner SaaSify his 7 gas stations across other geographies?
3. Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
4. Humor for life: When was the last time you told a concerned CEO, 'If Elon Musk can attempt to take humanity to Mars, why can't we take your business to run on the cloud?'
Your bucket of undertakings:
This position is responsible for consulting with clients and proposing architectural solutions to help move and improve infrastructure from on-premises to the cloud, or to help optimize cloud spend when moving from one public cloud to another.
1. Be the first to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation and DevOps; and be a solution visionary and technology expert across multiple channels.
2. Continually augment your skills and learn new tech as the technology and client needs evolve.
3. Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
4. Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
5. Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
6. Participate in technical reviews of requirements, designs, code and other artifacts.
7. Identify and keep abreast of new technical concepts in Google Cloud Platform.
8. Security, Risk and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance and related areas.
Accomplishment Set
● Passionate, persuasive, articulate cloud professional capable of quickly establishing interest and credibility.
● Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
● Strong service attitude and a commitment to quality.
● Highly organised and efficient.
● Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
1. Is education overrated? Yes, we believe so. However, there is no other way to locate you. So, unfortunately, we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12 (and the latter is better; we will find you faster if you mention it). It is not just degrees; we are not too thrilled by tech certifications either... :)
2. To reiterate: passion for awesome tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees!
3. 1 - 5 years of experience, with at least 2 - 3 years of hands-on experience in Cloud Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise environment.
4. Good analytical, communication, problem-solving and learning skills.
5. Knowledge of programming against cloud platforms such as Google Cloud Platform.
Requirements and Qualifications
- Bachelor’s degree in Computer Science Engineering or in a related field
- 4+ years of experience
- Excellent analytical and problem-solving skills
- Strong knowledge of Linux systems and internals
- Programming experience in Python/Shell scripting
- Strong AWS skills with knowledge of EC2, VPC, S3, RDS, CloudFront, Route 53, etc.
- Experience in containerization (Docker) and container orchestration (Kubernetes)
- Experience in DevOps & CI/CD tools such as Git, Jenkins, Terraform, Helm
- Experience with SQL & NoSQL databases such as MySQL, MongoDB, and ElasticSearch
- Debugging and troubleshooting skills using tools such as strace, tcpdump, etc.
- Good understanding of networking protocols and security concerns (VPN, VPC, IG, NAT, AZ, Subnet)
- Experience with monitoring and data analysis tools such as Prometheus, EFK, etc. (a minimal metrics sketch follows this list)
- Good communication and collaboration skills and attention to detail
- Participation in rotating on-call duties
- The candidate must perform architectural analysis and know how to design enterprise-level systems.
- They should know how to design and simulate tools for the smooth delivery of systems.
- They should know how to design, develop, and maintain systems, processes, and procedures that deliver a high-quality service design.
- They must work with other members of the team and with other departments to establish healthy communication and information flow.
- They should know how to deliver a high-performing solution architecture that can support the development efforts of a business.
- They must plan, design, and configure the most typical business solutions as needed.
- They must prepare technical documents and other presentations for multiple solution areas.
- They must ensure that configuration management best practices are carried out as needed.
- They must work on customer specifications, analyze them, and make the best product recommendations for the platform.
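As a minimal sketch of the monitoring item in the list above, using the prometheus_client Python library (the metric names, port, and simulated workload are placeholders, not part of the requirements):

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Placeholder metrics; a real service would instrument its actual request handlers.
REQUESTS = Counter("demo_requests_total", "Total number of handled requests")
IN_FLIGHT = Gauge("demo_in_flight_requests", "Requests currently being handled")

def handle_request():
    """Simulate a request so the exporter has something to report."""
    with IN_FLIGHT.track_inprogress():
        REQUESTS.inc()
        time.sleep(random.uniform(0.01, 0.1))

if __name__ == "__main__":
    start_http_server(8000)   # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()
```

A Prometheus server scraping this endpoint could then drive dashboards and alerts.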
Requirements
- AWS Solution Architect with 9-10 years of experience.
- Responsible for managing applications on public cloud (AWS) infrastructure.
- Responsible for larger migrations of applications from VM to cloud/cloud-native.
- Responsible for setting up monitoring for cloud/cloud-native-based infrastructure and applications.
- MUST: AWS Solution Architect Professional certification.