40+ Puppet Jobs in India
Apply to 40+ Puppet Jobs on CutShort.io. Find your next job, effortlessly. Browse Puppet Jobs and apply today!
Job Description:
We are seeking a motivated DevOps intern to join our team. The intern will be responsible for deploying and maintaining applications in AWS and Azure cloud environments, as well as on client local machines when required. The intern will troubleshoot any deployment issues and ensure the high availability of the applications.
Responsibilities:
- Deploy and maintain applications in AWS and Azure cloud environments
- Deploy applications on client local machines when needed
- Troubleshoot deployment issues and ensure high availability of applications (a minimal uptime-check sketch follows this list)
- Collaborate with development teams to improve deployment processes
- Monitor system performance and implement optimizations
- Implement and maintain CI/CD pipelines
- Assist in implementing security best practices
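Ensuring high availability, as called out above, usually starts with a simple uptime probe. A minimal sketch in Python, assuming a hypothetical health endpoint and using the requests library; a production setup would normally sit behind a monitoring service such as CloudWatch or Prometheus rather than a standalone script:

```python
import sys
import time

import requests

HEALTH_URL = "https://app.example.com/health"  # hypothetical endpoint
RETRIES = 3
TIMEOUT_SECONDS = 5


def is_healthy(url: str) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    for attempt in range(RETRIES):
        try:
            if requests.get(url, timeout=TIMEOUT_SECONDS).status_code == 200:
                return True
        except requests.RequestException:
            pass  # transient network error; fall through and retry
        time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    return False


if __name__ == "__main__":
    if not is_healthy(HEALTH_URL):
        print(f"ALERT: {HEALTH_URL} failed {RETRIES} health checks", file=sys.stderr)
        sys.exit(1)  # non-zero exit lets cron/CI surface the failure
    print("OK")
```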
Requirements:
- Currently pursuing a degree in Computer Science, Engineering, or related field
- Knowledge of cloud computing platforms (AWS, Azure)
- Familiarity with containerization technologies (Docker, Kubernetes)
- Basic understanding of networking principles
- Strong problem-solving skills
- Excellent communication skills
Nice to Have:
- Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet)
- Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack)
- Understanding of security best practices in cloud environments
Benefits:
- Hands-on experience with cutting-edge technologies.
- Opportunity to work on exciting AI and LLM projects
MangoApps builds enterprise products that make employees at organizations across the globe more effective and productive in their day-to-day work. We are looking for tech pros, great communicators and collaborators, and efficient team players for this role. This is your opportunity to be part of a rapidly growing organization and gain a deeper level of expertise in AWS Cloud Infrastructure as you help us build and run large-scale, distributed, fault-tolerant software systems and infrastructure. Reach out to us if you are driven by scale and complexity challenges. The role involves:
50% Project based work:
- Drive change and constantly improve the service delivered to the customer with minimum disruption.
- Perform tasks with competing priorities and adapt to changing business needs.
- Create and manage VMs with different combinations of OS and software using EC2 and RDS, including patching (a provisioning sketch follows this list).
- Manage high volume networks and large traffic websites and make recommendations for scaling when appropriate.
- Drive automation for repetitive tasks to build efficiency and ensure consistent delivery.
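Provisioning work like the EC2/RDS item above is commonly scripted against the AWS SDK. A minimal boto3 sketch; the AMI ID, key pair, region, and tag values are placeholders, not taken from the posting:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is a placeholder

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "app-server-01"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; waiting for status checks...")

# Block until the instance passes its status checks, then hand it to
# configuration management (Puppet, Ansible, etc.) for software setup.
ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])
print("Instance ready for configuration.")
```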
40% Break/Fix:
- Apply analytical skills to assist in the resolution of complex, time-sensitive issues or escalate, when necessary, with a sense of accountability and sound personal judgment.
- Apply advanced troubleshooting techniques on a variety of operating systems (Windows, Mac, Linux) to provide unique solutions to our customers' individual needs while adhering to security and best practice standards.
- Experience troubleshooting application and service issues, implementing and communicating technical solutions.
- Demonstrated ability to resolve issues and restore service quickly.
- Possesses the tenacity to delve to the root of the issue quickly, understand why it happened, and prevent it in the future.
10% Team Collaboration:
- Proven ability to collaborate with team members to achieve successful customer outcomes.
Requirements:
- In depth understanding of AWS S3 Lifecycles, EC2 and EBS volumes, AWS Networking and VPC, AWS Regions and Zones.
- RDS performance monitoring and identifying data/SQL issues using the slow query log.
- Proficiency in troubleshooting using AWS CloudWatch Logs Insights or similar tools for application log/database log monitoring (see the query sketch after this list).
- Hands-on experience working with ECS and Docker.
- Automation experience with puppet or Ansible.
- Proficient in scripting with bash and ruby/python.
- Proficiency in GitHub and GitHub Actions
- Expert level proficiency in Linux and application troubleshooting.
- Proficiency in CloudFormation or Terraform (Good to have)
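For the CloudWatch Logs Insights proficiency above, troubleshooting often means running a query programmatically and scanning the results. A hedged boto3 sketch; the log group name, region, and query are illustrative only:

```python
import time

import boto3

logs = boto3.client("logs", region_name="ap-south-1")   # placeholder region
LOG_GROUP = "/ecs/my-app"                                # hypothetical log group

# Count ERROR lines per minute over the last hour.
QUERY = """
fields @timestamp, @message
| filter @message like /ERROR/
| stats count() as errors by bin(1m)
| sort errors desc
"""

now = int(time.time())
started = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 3600,
    endTime=now,
    queryString=QUERY,
)

# Poll until CloudWatch finishes executing the query.
while True:
    result = logs.get_query_results(queryId=started["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```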
Why Explore a Career at MangoApps?
- You are ready for your next challenge.
- If you're looking to make an impact, MangoApps is the place for you. We are a young organization and growing fast.
- We focus on getting things done and know how to have fun while we do them.
- You want to work in a fast-paced, dynamic environment where your contribution matters. We have a team of people who bring creativity, energy and excellence to every engagement.
- The breadth of what we do means exceptional opportunities for learning and development.
- As a group, we are flat and treat everyone the same.
What are we looking for in you?
Self-motivated: You can work with a minimum of supervision and be capable of strategically prioritizing multiple tasks in a proactive manner.
Driven: You are a driven team player, collaborator, and relationship builder whose infectious can-do attitude inspires others and encourages great performance in a fast-moving environment
Entrepreneurial: You thrive in a fast-paced, changing environment and you’re excited by the chance to play a large role.
Passionate: You must be passionate about online collaboration and ensuring our clients are successful; we love seeing hunger and ambition.
Go-getter: You thrive in a startup environment with a "whatever it takes" attitude.
Role: Principal DevOps Engineer
About the Client
It is a product-based company building a platform using AI and ML technology for transportation and logistics. They also have a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, Cloud Watch Monitoring, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment and decision-making skill
Roles & Responsibilities:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
Job Description:
• Drive end-to-end automation from GitHub/GitLab/BitBucket to Deployment,
Observability and Enabling the SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of
digital platforms and applications
• Solid understanding of DevSecOps Workflows that support CI, CS, CD, CM, CT.
• Deploy, configure, and manage SaaS and PaaS cloud platform and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting)
• DevOps programming: writing scripts, building operations/server instance/app/DB monitoring tools
• Set up / manage continuous build and dev project management environments: Jenkins X/GitHub Actions/Tekton, Git, Jira
• Designing secure networks, systems, and application architectures
• Collaborating with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Planning, researching, and developing security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and
best practices
• Installation and use of firewalls, data encryption and other security products and
procedures
• Maturity in understanding compliance, policy and cloud governance and ability to
identify and execute automation.
• At Wesco, we discuss solutions more than problems. We celebrate innovation and creativity.
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years overall and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
Role Description:
● Own, deploy, configure, and manage infrastructure environment and/or applications in
both private and public cloud through cross-technology administration (OS, databases,
virtual networks), scripting, and monitoring automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or
automation incidents/requests.
● Manage rollout of patches and release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command line tools and shells
● Strong working knowledge of Linux/UNIX and related applications
● Knowledge in implementing DevOps and having an inclination towards automation.
● Sound knowledge of infrastructure-as-code approaches with Puppet, Chef, Ansible, or Terraform, and Helm (preference towards Terraform, Ansible, and Helm).
● Must have strong experience in technologies such as Docker, Kubernetes, OpenShift,
etc.
● Working with REST/gRPC/GraphQL APIs
● Knowledge in networking, firewalls, network automation
● Experience with Continuous Delivery pipelines - Jenkins/JenkinsX/ArgoCD/Tekton.
● Experience with Git, GitHub, and related tools
● Experience in at least one public cloud provider
Skills/Competencies
● Foundation: OS (Linux/Unix) & N/w concepts and troubleshooting
● Automation: Bash or Python or Golang
● CI/CD & Config Management: Jenkins, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infrastructure as Code: Terraform
● Platform: Docker, K8s, VMs
● Databases: MySQL, PostgreSQL; data stores (Mongo, Redis, Aerospike) are good to have
● Security: Vulnerability Management and Golden Image
● Cloud: Deep working knowledge on any public cloud (GCP preferable)
● Monitoring Tools: Prometheus, Grafana, NewRelic
Requirement
- 1 to 7 years of experience, with relevant experience in managing development operations
- Hands-on experience with AWS
- Thorough knowledge of setting up release pipelines, and managing multiple environments like Beta, Staging, UAT, and Production
- Thorough knowledge of best cloud practices and architecture
- Hands-on with benchmarking and performance monitoring
- Identifying various bottlenecks and taking pre-emptive measures to avoid downtime
- Hands-on knowledge with at least one toolset Chef/Puppet/Ansible
- Hands-on with CloudFormation / Terraform or other Infrastructure as code is a plus.
- Thorough experience with shell scripting, and should not shy away from learning new technologies or programming languages
- Experience with other cloud providers like Azure and GCP is a plus
- Should be open to R&D for creative ways to improve performance while keeping costs low
What do we want the person to do?
- Manage, Monitor and Provision Infrastructure - Majorly on AWS
- Will be responsible for maintaining 100% uptime on production servers (Site Reliability)
- Setting up a release pipeline for current releases. Automating releases for Beta, Staging & Production
- Maintaining near-production replica environments on Beta and Staging
- Automating Releases and Versioning of Static Assets (Experience with Chef/Puppet/Ansible)
- Should have hands-on experience with Build Tools like Jenkins, GitHub Actions, AWS CodeBuild etc
- Identify performance gaps and ways to fix them.
- Weekly meetings with Engineering Team to discuss the changes/upgrades. Can be related to code issues/architecture bottlenecks.
- Creative Ways of Reducing Costs of Cloud Computing
- Convert Infrastructure Deployment / Provision to Infrastructure as Code for reusability and scaling.
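Converting provisioning to Infrastructure as Code, as the last item describes, can start with a single templated stack applied from a script or pipeline. A minimal boto3 CloudFormation sketch; the stack name and the inline template (one versioned S3 bucket) are placeholders, and a real template would live in version control:

```python
import boto3

# Placeholder template: a single versioned S3 bucket for static assets.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  StaticAssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
"""

cfn = boto3.client("cloudformation", region_name="ap-south-1")  # placeholder region
STACK_NAME = "static-assets-beta"                               # placeholder name

cfn.create_stack(StackName=STACK_NAME, TemplateBody=TEMPLATE)

# Block until the stack is fully created (the waiter raises on rollback).
cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)
print(f"Stack {STACK_NAME} created; change the template and update to evolve it.")
```

Because the stack is described declaratively, the same template can be reused for Beta, Staging, and Production with different parameters, which is the reusability-and-scaling point the item above makes.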
We are hiring DevOps Engineers for a luxury-commerce platform that is well-funded and is now ready for its next level of growth. It is backed by reputed investors and is already a leader in its space. The focus for the coming years will be heavily on scaling the platform through technology. Market-driven, competitive salary for the right candidate.
Job Title : DevOps System Engineer
Responsibilities:
- Implementing, maintaining, monitoring, and supporting the IT infrastructure
- Writing scripts for service quality analysis, monitoring, and operation
- Designing procedures for system troubleshooting and maintenance
- Investigating and resolving technical issues by deploying updates/fixes
- Implementing automation tools and frameworks for automatic code deployment (CI/CD)
- Quality control and management of the codebase
- Ownership of infrastructure and deployments in various environments
Requirements:
- Degree in Computer Science, Engineering or a related field
- Prior experience as a DevOps engineer
- Good knowledge of various operating systems - Linux, Windows, Mac.
- Good Knowledge of Networking, virtualization, Containerization technologies.
- Familiarity with software release management and deployment (Git, CI/CD)
- Familiarity with one or more popular cloud platforms such as AWS, Azure, etc.
- Solid understanding of DevOps principles and practices
- Knowledge of systems and platforms security
- Good problem-solving skills and attention to detail
Skills: Linux, Networking, Docker, Kubernetes, AWS/Azure, Git/GitHub, Jenkins, Selenium, Puppet/Chef/Ansible, Nagios
Experience : 5+ years
Location: Prabhadevi, Mumbai
Interested candidates can apply with their updated profiles.
Regards,
HR Team
Aza Fashions
Interfaces with other processes and/or business functions to ensure they can leverage the
benefits provided by the AWS Platform process
Responsible for managing the configuration of all IaaS assets across the platforms
Hands-on Python experience
Manages the entire AWS platform (Python, Flask, REST API, serverless framework) and recommends solutions that best meet the organization's requirements
Has a good understanding of the various AWS services, particularly S3, Athena, Python code, Glue, Lambda, CloudFormation, and other AWS serverless resources.
AWS certification is a plus
Knowledge of best practices for IT operations in an always-on, always-available service model
Responsible for the execution of the process controls, ensuring that staff comply with process
and data standards
Qualifications
Bachelor’s degree in Computer Science, Business Information Systems or relevant experience and
accomplishments
3 to 6 years of experience in the IT field
AWS Python developer
AWS, Serverless/Lambda, Middleware.
Strong AWS skills including Data Pipeline, S3, RDS, Redshift with familiarity with other components
like - Lambda, Glue, Step functions, CloudWatch
Must have created REST APIs with AWS Lambda.
Relevant Python experience: 3 years
Good to have: experience working on projects and problem solving with large-scale, multi-vendor teams
Good to have: knowledge of Agile development
Good knowledge of the SDLC
Hands-on with AWS databases (RDS, etc.)
Good to have: unit testing experience
Good to have: working knowledge of CI/CD
Decent communication skills, as there will be client interaction and documentation
Education (degree): Bachelor’s degree in Computer Science, Business Information Systems or relevant
experience and accomplishments
Years of Experience: 3-6 years
Technical Skills
Linux/Unix system administration
Continuous Integration/Continuous Delivery tools like Jenkins
Cloud provisioning and management – Azure, AWS, GCP
Ansible, Chef, or Puppet
Python, PowerShell & BASH
Job Details
JOB TITLE/JOB CODE: AWS Python Developer, III-Sr. Analyst
RC: TBD
PREFERRED LOCATION: HYDERABAD, IND
POSITION REPORTS TO: Manager USI T&I Cloud Managed Platform
CAREER LEVEL: 3
Work Location:
Hyderabad
at LogiNext
Only apply on this link - https://loginext.hire.trakstar.com/jobs/fk025uh?source=
LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to support the development and operations efforts of its product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master in troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations to automate daily tasks
- Ensure High Availability and Auto-failover with minimum or no manual interventions
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix Administration and Python/Shell Scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, Cloud Watch Monitoring, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in query analysis, performance tuning, and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills.
- Excellent leadership skills.
Job Description:
• Contribute to customer discussions in collecting the requirement
• Engage in internal and customer POC’s to realize the potential solutions envisaged for the customers.
• Design/Develop/Migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
Qualification:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, as well as in-depth knowledge where required.
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST API and/or Python programming. TypeScript/NodeJS backend experience
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, Terraform
• End to end experience in Architecture, Design and Development of VMware Cloud Automation suite with good exposure to VMware products and/or Solutions.
• Hands-on experience in automation, coding, debugging and release.
• Sound process knowledge from requirement gathering, implementation, deployment and Support.
• Experience in working with global teams, customers and partners with solid communication skills.
• VMware CMA certification would be a plus
• Academic background in MS/BE/B-Tech/ IT/CS/ECE/EE would be preferred.
upraisal
- Work towards improving the following 4 verticals - scalability, availability, security, and cost - for the company's workflows and products.
- Help in provisioning, managing, optimizing cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK etc.)
- Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
- Drive technical initiatives and architectural service improvements.
- Be able to predict problems and implement solutions that detect and prevent outages.
- Mentor/manage a team of engineers.
- Design solutions with failure scenarios in mind to ensure reliability.
- Document rigorously to keep track of all changes/upgrades to the infrastructure, and share knowledge with the rest of the team
- Identify vulnerabilities during development with actionable information to empower developers to remediate vulnerabilities
- Automate the build and testing processes to consistently integrate code
- Manage changes to documents, software, images, large web sites, and other collections of code, configuration, and metadata among disparate teams
Rapidly growing fintech SaaS firm that propels business growth
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the GIT repository, DevOps tools like Jenkins, UCD, Docker, Kubernetes, Jfrog Artifactory, Cloud monitoring tools, and Cloud security.
Key Responsibilities
- Set up, configure, and maintain GIT repos, Jenkins, UCD, etc. for multi-hosting cloud environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary (see the cluster health-check sketch after this list).
- Work on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve a successful implementation of integrated solutions within the portfolio.
- Have the necessary technical and professional expertise.
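For the cluster-maintenance item above, a first-pass health sweep can be scripted with the official Kubernetes Python client. A minimal sketch, assuming a kubeconfig already points at the cluster (for example one written by `aws eks update-kubeconfig`):

```python
from kubernetes import client, config

config.load_kube_config()        # relies on an existing kubeconfig
v1 = client.CoreV1Api()

# Flag any pod that is not Running or Succeeded -- a cheap first-pass check
# before digging into events, logs, or node pressure.
unhealthy = []
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        unhealthy.append((pod.metadata.namespace, pod.metadata.name, phase))

if unhealthy:
    print("Pods needing attention:")
    for namespace, name, phase in unhealthy:
        print(f"  {namespace}/{name}: {phase}")
else:
    print("All pods healthy.")
```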
What are we looking for?
- Minimum 5-12 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipeline.
- Experience in DevOps automation tools. Well versed with DevOps Frameworks, and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or chef.
- Experience and good understanding of any Cloud like AWS, Azure, or Google cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products - Plum, Empuls, and Compass. It works with more than 2,000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
at 6sense
The Company:
It’s no surprise that 6sense is named a top workplace year after year — we have industry-leading technology developed and taken to market by a world-class team. 6sense is Top Rated on Glassdoor with a 4.9/5 and our CEO Jason Zintak was recognized as the #1 CEO in the small & medium business category by Glassdoor’s 2021 Top CEO Employees Choice Awards (https://www.glassdoor.com/Award/Top-CEOs-at-SMBs-LST_KQ0%2C16.htm).
In 2021, the company was recognized for having the Best Company for Diversity, Best Company for Women, Best CEO, Best Company Culture, Best Company Perks & Benefits and Happiest Employees from the employee feedback platform Comparably. In addition, 6sense has also won several accolades that demonstrate its reputation as an employer of choice including the Glassdoor Best Place to Work (2022), TrustRadius Tech Cares (2021) and Inc. Best Workplaces (2022, 2021, 2020, 2019).
6sense reinvents the way organizations create, manage, and convert pipeline to revenue. The 6sense Revenue AI captures anonymous buying signals, predicts the right accounts to target at the ideal time, and recommends the channels and messages to boost revenue performance. Removing guesswork, friction and wasted sales effort, 6sense empowers sales, marketing, and customer success teams to significantly improve pipeline quality, accelerate sales velocity, increase conversion rates, and grow revenue predictably.
Senior Software Engineer - Infrastructure, Cloud
Responsibilities:
Develop and deploy services to improve the availability, ease of use/management, and visibility of 6sense systems
Building and scaling out our services and infrastructure
Learning and adopting technologies that may aid in solving our challenges
Own our critical underlying systems like AWS, Kubernetes, Mesos, infrastructure deployment, and compute cluster architecture (which serves frameworks and engines like Hadoop/Hive/Presto)
Write/review/debug production code, develop documentation and capacity plans, and debug live production problems
Contribute back to open-source projects if we need to add or patch functionality
Support the overall Software Engineering team to resolve any issues they encounter
Minimum Qualifications:
5+ years of experience with Linux/Unix system administration and networking fundamentals
3+ years in a Software Engineering role or equivalent experience
4+ years of working with AWS
4+ years of experience working with Kubernetes, Docker.
Strong skills in reading code as well as writing clean, maintainable, and scalable code
Good knowledge of Python
Experience designing, building, and maintaining scalable services and/or service-oriented architecture
Experience with high-availability
Experience with modern configuration management tools (e.g. Ansible/AWX, Chef, Puppet, Pulumi) and idempotency
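Idempotency, mentioned in the last qualification, means a run that changes nothing when the desired state already holds. A minimal illustration in Python with boto3; the bucket name is hypothetical, and tools like Ansible, Chef, and Puppet give the same guarantee declaratively:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET = "example-team-artifacts"   # hypothetical bucket name


def ensure_bucket(name: str) -> None:
    """Create the bucket only if it does not already exist (idempotent)."""
    try:
        s3.head_bucket(Bucket=name)     # succeeds if the bucket already exists
        print(f"{name}: already present, nothing to do")
    except ClientError as error:
        if error.response["Error"]["Code"] == "404":
            s3.create_bucket(Bucket=name)
            print(f"{name}: created")
        else:
            raise                        # permissions or other real failures


ensure_bucket(BUCKET)
ensure_bucket(BUCKET)   # second run is a no-op -- the defining property of idempotency
```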
Bonus Requirements:
Knowledge of standard security practices
Knowledge of the Hadoop ecosystem (e.g. Hadoop, Hive, Presto) including deployment, scaling, and maintenance
Experience with operating and maintaining VPN/SSH/ZeroTrust access infrastructure
Experience with CDNs such as CloudFront and Akamai
Good knowledge of Javascript, Java, Golang
Exposure to modern build systems such as Bazel, Buck, or Pants
Every person in every role at 6sense owns a part of defining the future of our industry-leading technology. You’ll join a team where curiosity is prized, no one’s satisfied with the status quo, and everyone’s all-in on the collective good. 6sense is a place where difference-makers roll up their sleeves, take risks, act with integrity, and measure success by the value we create for our customers.
We want 6sense to be the best chapter of your career.
Feel part of something
You’ll be part of building tomorrow’s tech, revolutionizing how marketing and sales teams create, manage, and convert pipeline to revenue. And you’ll be seen and appreciated by co-workers who challenge you, cheer you on, and always have your back.
At 6sense, you’ll experience the passion from customers and colleagues alike for our market-leading vision, and you're entrusted with applying your unique talents to help bring that vision to life.
Build a career
As part of a company on a rocketship trajectory, there’s no way around it: You’re going to experience unparalleled career growth. With colleagues as humble and hungry as you are, and a leadership philosophy grounded in trust, transparency, and empowerment, every day is a chance to improve on the one before.
Enjoy access to our Udemy Training Library with 5,000+ courses, give and get recognition from your coworkers, and spend time with our executive team every two weeks in our All Hands gathering to connect, learn and ask leaders about whatever is on your mind.
Enjoy work, and your life
This is a place where you’ll do your best work and inspire others to do theirs — where you’re guaranteed to make real connections, for life, along the way.
We want to help you prioritize health and wellness, today and tomorrow. Take advantage of family medical coverage; a monthly stipend to support your physical, mental, and financial wellness; generous paid parental leave benefits; Plus, we have an open time-off policy, so you can take the time you need.
Set for success
A vision as big as ours only comes to life when we’re all winning together.
We’ll make sure you have the equipment you need to work at home or in one of our offices. And have the right snacks, pens or lighting with our work-from-home expense reimbursement allowance. We also partner with WeWork to make sure that if your choice is a hybrid of home and office, we have you covered in the locations they’re offered.
That’s the commitment we make to every one of our employees. If this sounds like a place where you'll thrive as you take your success to the next level, let’s chat!
Profile Description:
The job holder will work with developers and the IT staff to oversee the code releases, combining an understanding of both engineering and coding. From creating and implementing systems software to analyzing data to improve existing ones, a DevOps Engineer increases productivity in the workplace.
Key Responsibilities:
- Understanding customer requirements and project KPIs
- Implementing various development, testing, automation tools, and IT infrastructure
- Planning the team structure, activities, and involvement in project management activities.
- Managing stakeholders and external interfaces
- Setting up tools and required infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Have the technical skill to review, verify, and validate the software code developed in the project.
- Troubleshooting techniques and fixing the code bugs
- Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste
- Encouraging and building automated processes wherever possible
- Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
- Incident management and root cause analysis
- Coordination and communication within the team and with customers
- Selecting and deploying appropriate CI/CD tools
- Strive for continuous improvement and build a continuous integration, continuous development, and continuous deployment pipeline (CI/CD pipeline)
- Mentoring and guiding the team members
- Monitoring and measuring customer experience and KPIs
- Managing periodic reporting on the progress to the management and the customer
A product-based company specializing in architectural products
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues in different systems. You will also be involved in building new features for the next generation of cloud recovery services and managed services.
· You will directly guide the technical strategy for our clients and build out a new capability within the company for DevOps to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, and the packaging and deployment of applications.
To be the right fit, you'll need:
· Expert in Cloud Services like AWS.
· Experience in Terraform Scripting.
· Experience in container technology like Docker and orchestration like Kubernetes.
· Good knowledge of CI/CD frameworks and pipelines such as Jenkins, Bamboo, etc.
· Experience with version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible)
- Minimum 3+ years of experience in DevOps with the AWS platform
- Strong AWS knowledge and experience
- Experience in using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)
- Experience with IaC tools such as Terraform
- Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)
- Significant experience with Linux operating system environments
- Experience with infrastructure scripting solutions such as Python/shell scripting
- Must have experience in designing infrastructure automation frameworks
- Good experience in setting up monitoring tools and dashboards (Grafana/Kafka); see the custom-metric sketch after this list
- Excellent problem-solving, log analysis, and troubleshooting skills
- Experience in setting up centralized logging for systems (EKS, EC2) and applications
- Process-oriented with great documentation skills
- Ability to work effectively within a team and with minimal supervision
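For the monitoring-and-logging items above, one common building block is publishing a custom metric and alarming on it. A hedged boto3 sketch; the namespace, metric, and thresholds are placeholders, and the same data can just as easily feed a Grafana dashboard:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

# Publish one sample of a hypothetical application metric.
cloudwatch.put_metric_data(
    Namespace="MyApp/Workers",   # placeholder namespace
    MetricData=[{
        "MetricName": "QueueDepth",
        "Dimensions": [{"Name": "Environment", "Value": "production"}],
        "Value": 42.0,
        "Unit": "Count",
    }],
)

# Alarm when the queue stays deep for five consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="queue-depth-high",
    Namespace="MyApp/Workers",
    MetricName="QueueDepth",
    Dimensions=[{"Name": "Environment", "Value": "production"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="breaching",   # treat gaps in data as a problem
)
```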
Our Client is an IT infrastructure services company, focused and specialized in delivering solutions and services on Microsoft products and technologies. They are a Microsoft partner and cloud solution provider. Our Client's objective is to help small, mid-sized as well as global enterprises to transform their business by using innovation in IT, adapting to the latest technologies and using IT as an enabler for business to meet business goals and continuous growth.
With focused and experienced management and a strong team of IT Infrastructure professionals, they are adding value by making IT Infrastructure a robust, agile, secure and cost-effective service to the business. As an independent IT Infrastructure company, they provide their clients with unbiased advice on how to successfully implement and manage technology to complement their business requirements.
- Providing on-call support within a high availability production environment
- Logging issues
- Providing Complex problem analysis and resolution for technical and application issues
- Supporting and collaborating with team members
- Running system updates
- Monitoring and responding to system alerts
- Developing and running system health checks
- Applying industry standard practices across the technology estate
- Performing system reviews
- Reviewing and maintaining infrastructure configuration
- Diagnosing performance issues and network bottlenecks
- Collaborating within geographically distributed teams
- Supporting software development infrastructure by continuous integration and delivery standards
- Working closely with developers and QA teams as part of a customer support centre
- Project delivery work, either individually or in conjunction with other teams, external suppliers or contractors
- Ensuring maintenance of the technical environments to meet current standards
- Ensuring compliance with appropriate industry and security regulations
- Providing support to Development and Customer Support teams
- Managing the hosted infrastructure through vendor engagement
- Managing 3rd party software licensing ensuring compliance
- Delivering new technologies as agreed by the business
What you need to have:
- Experience working within a technical operations environment, relevant to the associated skills stated below.
- Be proficient in:
- Linux, zsh/ bash/ similar
- ssh, tmux/ screen/ similar
- vim/ emacs/ similar
- Computer networking
- Have a reasonable working knowledge of:
- Cloud infrastructure, Preferably GCP
- One or more programming/ scripting languages
- Git
- Docker
- Web services and web servers
- Databases, relational and NoSQL
- Some familiarity with:
- Puppet, ansible
- Terraform
- GitHub, CircleCI , Kubernetes
- Scripting language- Shell
- Databases: Cassandra, Postgres, MySQL or CloudSQL
- Agile working practices including scrum and Kanban
- Private & public cloud hosting environments
- Strong technology interests with a positive ‘can do’ attitude
- Be flexible and adaptable to changing priorities
- Be good at planning and organising their own time and able to meet targets and deadlines without supervision
- Excellent written and verbal communication skills.
- Approachable with both colleagues and team members
- Be resourceful and practical with an ability to respond positively and quickly to technical and business challenges
- Be persuasive, articulate and influential, but down to earth and friendly with own team and colleagues
- Have an ability to establish relationships quickly and to work effectively either as part of a team or singularly
- Be customer focused with both internal and external customers
- Be capable of remaining calm under pressure
- Technically minded with good problem resolution skills and systematic manner
- Excellent documentation skills
- Prepared to participate in out of hours support rota
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, which address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g. Docker Swarm/Mesos/Kubernetes/OpenStack)
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engine, caching, etc.
· Professional Certificates in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multi-national client timings.
Will be involved in solution designing from the conceptual stages through development cycle and deployments.
Be involved in development operations and support internal teams
Improve infrastructure uptime, performance, resilience, reliability through automation
Willing to learn new technologies and work on research-orientated projects
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
http://www.banyandata.com
Devops Engineer
Roles and Responsibilities:
As a DevOps Engineer, you’ll be responsible for ensuring that our products can be seamlessly deployed on infrastructure, whether it is on-prem or on public clouds.
- Create, Manage and Improve CI / CD pipelines to ensure our Platform and Applications can be deployed seamlessly
- Evaluate, Debug, and Integrate our products with various Enterprise systems & applications
- Build metrics, monitoring, logging, configurations, analytics, and alerting for performance and security across all endpoints and applications (see the metrics sketch after this list)
- Build and manage infrastructure-as-code deployment tooling, solutions, microservices and support services on multiple cloud providers and on-premises
- Ensure reliability, availability and security of our infrastructure and products
- Update our processes and design new processes as needed to optimize performance
- Automate our processes in compliance with our security requirements
- Manage code deployments, fixes, updates, and related processes
- Manage environment where we deploy our product to multiple clouds that we control as well as to client-managed environments
- Work with CI and CD tools, and source control such as Git and SVN.
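For the metrics-and-alerting item above, a typical starting point is exposing application metrics for a scraper to collect. A minimal sketch with the prometheus_client library; the metric names, labels, and port are illustrative rather than taken from any specific product:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics; real names would mirror the product's endpoints.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["endpoint"])


def handle_request(endpoint: str) -> None:
    """Simulate work while recording the metrics a scraper will collect."""
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))   # stand-in for real request handling
    REQUESTS.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    start_http_server(8000)          # exposes /metrics for Prometheus to scrape
    while True:
        handle_request("/api/orders")
```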
Skills/Requirements:
- 2+ years of experience in DevOps, SRE or equivalent positions
- Experience working with Infrastructure as Code / Automation tools
- Experience in deploying, analysing, and debugging on multiple environments (AWS, Azure, Private Clouds, Data Centres, etc), Linux/Unix administration, Databases such as MySQL, PostgreSQL, NoSQL, DynamoDB, Cosmos DB, MongoDB, Elasticsearch and Redis (both managed instances as well as self-installed).
- Knowledge of scripting languages such as Python, PowerShell and / or Bash.
- Hands-on experience with the following is a must: Docker, Kubernetes, ELK Stack
- Hands-on experience with at least three of the following: Terraform, AWS CloudFormation, Jenkins, Wazuh SIEM, Ansible, Ansible Tower, Puppet, Chef
- Good troubleshooting skills with the ability to spot issues.
- Strong communication skills and documentation skills.
- Experience with deployments with Fortune 500 or other large Global Enterprise clients is a big plus
- Experience with participating in an ISO27001 certification / renewal cycle is a plus.
- Understanding of Information Security fundamentals and compliance requirements
Work From Home
Start Up Background is preferred
Company Location: Noida
at One Championship
As part of the engineering team, you would be expected to have
deep technology expertise with a passion for building highly scalable products.
This is a unique opportunity where you can impact the lives of people across 150+
countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploying and maintaining in-house/customer systems ensuring high availability,
performance and optimal cost.
• Automate build pipelines, ensuring the right architecture for CI/CD
• Work with engineering leaders to ensure cloud security
• Develop standard operating procedures for various facets of Infrastructure
services (CI/CD, Git Branching, SAST, Quality gates, Auto Scaling)
• Perform and automate regular backups of servers and databases. Ensure rollback and restore capabilities are real-time and zero-downtime (a snapshot-automation sketch follows this list).
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps
engineers. Ensure industry standards are followed.
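Backup automation of the kind described above is often a scheduled snapshot plus a wait-for-completion check. A minimal boto3 sketch for RDS; the instance identifier and region are placeholders:

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds", region_name="ap-southeast-1")   # placeholder region
DB_INSTANCE = "championship-prod-db"                       # hypothetical instance ID

# Timestamped identifier so snapshots never collide and are easy to audit.
snapshot_id = f"{DB_INSTANCE}-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier=DB_INSTANCE,
)

# Wait until the snapshot is usable before reporting success to the scheduler.
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
print(f"Snapshot {snapshot_id} available for point-in-time restores.")
```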
Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using
Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, AlertManager,
Newrelic
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure
Pipeline or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust & Browserstack is a plus
They provide both wholesale and retail funding.
- 7-10 years experience with secure SDLC/DevSecOps practices such as automating security processes within CI/CD pipeline.
- At least 4 years of experience designing and securing Data Lake & Web applications deployed to AWS and Azure; scripting/automation skills in Python, Shell, YAML, JSON
- At least 4 years of hands-on experience with software development lifecycle, Agile project management (e.g. Jira, Confluence), source code management (e.g. Git), build automation (e.g. Jenkins), code linting and code quality (e.g. SonarQube), test automation (e.g. Selenium)
- Hands-on with and a solid understanding of Amazon Web Services & Azure-based infra & applications
- Experience writing CloudFormation templates, and with Jenkins, Kubernetes, Docker, and microservice application architecture and deployment.
- Strong know-how on VA/PT integration in CI/CD pipeline.
- Experience in handling financial solutions & customer-facing applications
Roles
- Accelerate enterprise cloud adoption while enabling rapid and stable delivery of capabilities using continuous integration and continuous deployment principles, methodologies, and technologies
- Manage & deliver diverse cloud [AWS, Azure, GCP] DevSecOps journeys
- Identify, prototype, engineer, and deploy emerging software engineering methodologies and tools
- Maximize automation and enhance DevSecOps pipelines and other tasks
- Define and promote enterprise software engineering and DevSecOps standards, practices, and behaviors
- Operate and support a suite of enterprise DevSecOps services
- Implement security automation to shorten the loop between the development and deployment processes (a scan-gate sketch follows this list).
- Support project teams to adopt & integrate the DevSecOps environment
- Managing application vulnerabilities, Data security, encryption, tokenization, access management, Secure SDLC, SAST/DAST
- Coordinate with development and operations teams for practical automation solutions and custom flows.
- Own DevSecOps initiatives by providing objective, practical and relevant ideas, insights, and advice.
- Act as release gatekeeper with an understanding of concepts such as the OWASP Top 10 vulnerabilities, NIST SP 800-xx, NVD, and CVSS scoring
- Build workflows to ensure a successful DevSecOps journey for various enterprise applications.
- Understand the strategic direction to reach business goals across multiple projects & teams
- Collaborate with development teams to understand project deliverables and promote DevSecOps culture
- Formulate & deploy cloud automation strategies and tools
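Security automation in the pipeline, as mentioned in the roles above, can begin as a scan gate that fails the build on findings. A minimal Python sketch; the specific tools (bandit for static analysis, pip-audit for dependency vulnerabilities) are illustrative choices, not requirements from this posting:

```python
import subprocess
import sys

# Each command exits non-zero when it finds problems, so the gate can simply
# propagate the worst return code back to the CI job.
CHECKS = [
    ["bandit", "-r", "src/", "-q"],   # Python static analysis (SAST)
    ["pip-audit"],                    # known-vulnerability check on dependencies
]


def run_checks() -> int:
    worst = 0
    for command in CHECKS:
        print(f"Running: {' '.join(command)}")
        result = subprocess.run(command)
        worst = max(worst, result.returncode)
    return worst


if __name__ == "__main__":
    sys.exit(run_checks())   # non-zero exit blocks the deployment stage
```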
Skills
- Knowledge of the DevSecOps culture and principles.
- An understanding of cloud technologies & components
- A flair for programming languages such as Shell, Python, and JavaScript
- Strong teamwork and communication skills.
- Knowledge of threat modeling and risk assessment techniques.
- Up-to-date knowledge of cybersecurity threats, current best practices, and the latest software.
- An understanding of programs such as Puppet, Chef, ThreatModeler, Checkmarx, Immunio, and Aqua.
- Strong know-how of Kubernetes, Docker, AWS, Azure-based deployments
- On the job learning for new programming languages, automation tools, deployment architectures
A digital business enablement MNC
Minimum 4 years exp
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem solving skills, Good verbal and written communication skills
- Good knowledge of Linux environment: RedHat etc.
- Good in shell scripting
- Good to have Cloud Technology : AWS, GCP and Azure
A fast-growing SaaS commerce company based in Bangalore
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-hosting cloud environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Working on Docker images and maintaining Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Working on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
- Required Technical and Professional Expertise.
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools, and very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, and Ansible, Puppet, or Chef.
- Experience with and a good understanding of any cloud, such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
As a Scala Developer, you are part of the development of the core applications using the Micro Service paradigm. You will join an Agile team, working closely with our product owner, building and delivering a set of Services as part of our order management and fulfilment platform. We deliver value to our business with every release, meaning that you will immediately be able to contribute and make a positive impact.
Our approach to technology is to use the right tool for the job and, through good software engineering practices such as TDD and CI/CD, to build high-quality solutions that are built with a view to maintenance.
Requirements
The Role:
- Build high-quality applications and HTTP based services.
- Work closely with technical and non-technical colleagues to ensure the services we build meet the needs of the business.
- Support development of a good understanding of business requirements and corresponding technical specifications.
- Actively contribute to planning, estimation and implementation of team work.
- Participate in code review and mentoring processes.
- Identify and plan improvements to our services and systems.
- Monitor and support production services and systems.
- Keep up with industry trends and new tools, technologies & development methods with a view to adopting best practices that fit the team and promote adoption more widely.
Relevant Skills & Experience:
The following skills and experience are relevant to the role and we are looking for someone who can hit the ground running in these areas.
- Web service application development in Scala (essential)
- Functional Programming (essential)
- API development and microservice architecture (essential)
- Patterns for building scalable, performant, distributed systems (essential)
- Databases – we use PostgreSQL (essential)
- Common libraries – we use Play, Cats and Slick (essential)
- Strong communication and collaboration skills (essential)
- Performance profiling and analysis of JVM based applications
- Messaging frameworks and patterns
- Testing frameworks and tools
- Docker, virtualisation and cloud computing – we use AWS and Vmware
- Javascript including common frameworks such as React, Angular, etc
- Linux systems administration
- Configuration tooling such as Puppet and Ansible
- Continuous delivery tools and environments
- Agile software delivery
- Troubleshooting and diagnosing complex production issues
Benefits
- Fun, happy and politics-free work culture built on the principles of lean and self organisation.
- Work with large scale systems powering global businesses.
- Competitive salary and benefits.
Note: We are looking for immediate joiners and expect the offered candidate to join within 15 days. Buyout reimbursement is available for applicants with a 30-to-60-day notice period who are ready to join within 15 days.
To build on our success, we are looking for smart, conscientious software developers who want to work in a friendly, engaging environment and take our platform and products forward. In return, you will have the opportunity to work with the latest technologies, frameworks & methodologies in service development in an environment where we value collaboration and learning opportunities.
Our client company is in the hospitality sector.
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.
- Around 10+ years of experience in software development in the IT industry.
- Strong Python full-stack development skills.
- Work experience with OpenStack, Ansible, shell scripting, Chef, Puppet, Docker, ELK, OpenTSDB, Kafka, ZooKeeper, and Grafana.
- Work experience in SDN/NFV.
- Work experience in AWS and Azure cloud.
- Experience in Python, HTML, and CSS.
- Work experience with open-source and OpenFlow controllers (SDN).
- Work experience with Zabbix, Nagios, and OpenNMS monitoring tools.
- Experience in Agile methodology.
- Work experience with Type 1 and Type 2 hypervisors and KVM.
- Good knowledge of OOP concepts and the Django ORM.
- Good knowledge of MySQL, PostgreSQL, HDFS, and time-series databases.
- Basic knowledge of JavaScript and jQuery.
- Good knowledge of ONOS, OpenKilda, and Mininet.
- Work experience in SDN/NFV and orchestration.
- Work experience with supply chain management systems.
- Work experience with the MVT/MVC architecture.
- Good knowledge of networks, devices, service modeling, and systems automation.
- Work experience with API and JSON implementation (see the sketch after this list).
- Good understanding of software development (SDLC).
- Good team player; enthusiastic and a quick learner.
- Good interpersonal skills, commitment, and a results-oriented attitude, with a drive to learn new technologies and take on challenging tasks.
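As a purely illustrative sketch of the "API and JSON implementation" point above (not code from this role): a small Python snippet that calls a REST endpoint, parses its JSON body, and posts a JSON payload back. The URL and field names are hypothetical placeholders.

```python
# Minimal sketch of API & JSON handling with the requests library.
import requests

BASE_URL = "https://api.example.com"  # placeholder endpoint

def get_device(device_id: str) -> dict:
    resp = requests.get(f"{BASE_URL}/devices/{device_id}", timeout=10)
    resp.raise_for_status()   # fail fast on HTTP errors
    return resp.json()        # decode the JSON response body

def update_device(device_id: str, status: str) -> dict:
    resp = requests.post(
        f"{BASE_URL}/devices/{device_id}",
        json={"status": status},  # serialized as a JSON request body
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```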
- Around 8+ years of experience in software development in the IT industry.
- Sound knowledge of core Java.
- Work experience in SDN/NFV and orchestration.
- Work experience with open-source and OpenFlow controllers (SDN).
- Experience in Agile methodology.
- Good knowledge of MySQL, PostgreSQL, or any time-series database, plus Kafka and ZooKeeper (see the sketch after this list).
- Good knowledge of ONOS, ODL (OpenDaylight), OpenKilda, and Mininet.
- Work experience with the MVT/MVC architecture.
- Good knowledge of networks, devices, service modeling, and systems automation.
- Work experience with API and JSON implementation.
- Knowledge of OpenStack, Ansible, shell scripting, Chef, and Puppet.
- Good understanding of software development (SDLC).
- Good team player; enthusiastic and a quick learner.
- Good interpersonal skills, commitment, and a results-oriented attitude, with a drive to learn new technologies and take on challenging tasks.
- Knowledge of AWS and Azure cloud.
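For illustration only, a minimal Python sketch of publishing a JSON-encoded metric to Kafka, one of the messaging systems listed above. It assumes the kafka-python client library and a broker reachable at localhost:9092; the topic name and payload are hypothetical.

```python
# Minimal sketch: send a JSON message to a Kafka topic with kafka-python.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON encode
)

producer.send("device-metrics", {"device": "sw-01", "rx_bytes": 1024})
producer.flush()  # block until the message is actually delivered
```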
at Goodera
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensure regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• Acting as the first point of contact for customer issues (including internal stakeholders), providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Develop tooling and processes to drive and improve customer experience, create playbooks.
• Eliminate manual tasks via configuration management.
• Intelligently migrate services from one AWS region to other AWS regions.
• Create, implement and maintain security policies to ensure ISO/ GDPR / SOC / PCI compliance.
• Verify infrastructure Automation meets compliance goals and is current with disaster recovery plan.
• Evangelize configuration management and automation to other product developers.
• Keeping up to date with upcoming technologies to maintain state-of-the-art infrastructure.
Required Candidate profile :
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, configuring CloudWatch (see the sketch after this list), WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands-on experience with Docker is a big plus.
• Experience working in an Agile, fast-paced, DevOps environment.
• Strong knowledge of databases such as MongoDB, MySQL, DynamoDB, Redis, or Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems, with a preference for Git.
• Strong Linux-based infrastructure and Linux administration experience.
• Experience with installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on experience with logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and to train others on technical and procedural topics.
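For illustration only (not Goodera's actual tooling): a minimal Python/boto3 sketch of configuring a CloudWatch CPU alarm on an EC2 instance. The instance ID, SNS topic ARN, region, and thresholds are placeholder assumptions.

```python
# Minimal sketch: create a CloudWatch alarm that fires when average CPU
# stays above 80% for two 5-minute periods. All identifiers are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="demo-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # 5-minute datapoints
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```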
Job Responsibilities:
This role requires you to work on Linux systems and their associated services which provide the capability for IG to run their trading platform. Team responsibilities include daily troubleshooting and resolution of incidents, operational maintenance, and support for proactive and preventative analysis of Production and Development systems.
1. Managing the Linux infrastructure and web technologies:
   i. Patching and upgrades of Red Hat Linux OS and server firmware.
   ii. General Red Hat Linux system administration and networking.
   iii. Troubleshooting and issue resolution of OS and network stack incidents.
   iv. Configuration management using Puppet and version control.
   v. Systems monitoring and availability (see the sketch after this list).
   vi. Web applications and application routing.
   vii. Website infrastructure, content delivery, and security.
2. Day-to-day responsibilities will include completing service requests and responding to incidents and problems as they arise, as well as providing day-to-day support and troubleshooting for Production and Development systems.
3. Create a run book of operational processes and follow a support matrix of products.
4. Ensure internal handovers are completed and all OS documentation is updated.
5. Troubleshoot system issues, plan for future capacity, and monitor systems performance.
6. Proactively monitor the Linux platform and own these tools/dashboards.
7. Work with the delivery and engineering teams to develop the platform and technologies, striving to automate where possible.
8. Continuously improve the team, tools, and processes; support regular agile releases of applications and architectural improvements.
9. Participate in a team rota to provide out-of-hours support.
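As a purely illustrative example of the proactive monitoring this role automates (not IG's actual tooling): a small Python check that warns when a filesystem crosses a usage threshold. The mount points and the 85% threshold are assumptions.

```python
# Minimal sketch: warn when any watched filesystem exceeds a usage threshold.
import shutil

MOUNTS = ["/", "/var", "/home"]   # hypothetical mount points to watch
THRESHOLD = 85                    # percent used

def check_disk_usage() -> list[str]:
    warnings = []
    for mount in MOUNTS:
        usage = shutil.disk_usage(mount)           # total/used/free in bytes
        pct_used = usage.used / usage.total * 100
        if pct_used >= THRESHOLD:
            warnings.append(f"{mount} is {pct_used:.1f}% full")
    return warnings

if __name__ == "__main__":
    for warning in check_disk_usage():
        print("WARNING:", warning)
```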
Person Specification:
Ability / Expertise
This position is suited to an engineer with at least 8 years of Red Hat Linux / CentOS systems administration experience who is looking to broaden their range of technologies and work using modern tools and techniques.
We are looking for someone with the right attitude:
• Eager to learn new technologies, tools, and techniques alongside applying their existing skills and judgment.
• Pragmatic approach to balancing different work priorities such as incidents, requests, and troubleshooting.
• Can-do and proactive in improving the environments around them.
• Sets the desired goal and the plans to achieve it.
• Proud of their achievements and keen to improve further.
This will be a busy role in a team so the successful candidate’s behaviors will need to strongly align with our values:
• Champion the client: customer service is a passion, cultivates trust, has clarity and communicates well, works with pace and momentum
• Lead the way: innovative and resilient, strong learning agility and curiosity
• Love what we do: conscientiousness - high self-discipline, carefulness, thoroughness, and organization; flexible and adaptable
The successful candidate will be able to relate to the statements above and give examples that back them up. We believe that previous achievements signpost a good fit at IG.
Qualifications
Essential:
• At least 4 years' systems administration experience with Red Hat Enterprise Linux / CentOS 5/6/7 managed through a Satellite infrastructure.
• Managed an estate of 1000+ hosts and performed general system administration, networking, backup and restore, monitoring, and troubleshooting functions on that estate.
• 1 year of experience with scripting languages (Bash/Perl/Ruby) and automating tasks with Puppet and Red Hat Satellite. Experience with custom RPM generation.
• Strong analytical and troubleshooting skills. You will have resolved complex systems issues in your last role and have a solid understanding of the tools needed to do so.
• Excellent Communication (Listening, speaking, the transmission of concepts with/without examples, etc).
• Calm under pressure and work to tight deadlines. You will have brought critical production systems back to life.
Roles and Responsibilities
- Managing Availability, Performance, Capacity of infrastructure and applications.
- Building and implementing observability for applications health/performance/capacity.
- Optimizing On-call rotations and processes.
- Documenting “tribal” knowledge.
- Managing infrastructure platforms such as Mesos/Kubernetes, CI/CD, observability (Prometheus/New Relic/ELK), cloud platforms (AWS/Azure), databases, and data platform infrastructure.
- Providing help in onboarding new services with the production readiness review process.
- Providing reports on service SLOs, error budgets, alerts, and operational overhead (see the sketch after this list).
- Working with Dev and Product teams to define SLOs, error budgets, and alerts.
- Working with the Dev team to gain an in-depth understanding of the application architecture and its bottlenecks.
- Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
- Managing outages, doing detailed RCAs with developers, and identifying ways to avoid recurrence.
- Managing/automating upgrades of the infrastructure services.
- Automating toil work.
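For illustration of the arithmetic behind SLO and error-budget reporting (the figures are examples, not this team's actual SLOs): a 99.9% availability SLO over a 30-day window allows roughly 43.2 minutes of downtime.

```python
# Minimal sketch of error-budget maths for SLO reporting.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)        # minutes of allowed downtime

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means it is blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(error_budget_minutes(0.999))    # 43.2 minutes per 30-day window
print(budget_remaining(0.999, 10.0))  # ~0.77 of the budget still left
```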
Experience & Skills
- 6+ years of total experience
- Experience as an SRE/DevOps/Infrastructure Engineer on large scale microservices and infrastructure.
- A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
- A deep understanding of computer science, software development, and networking principles.
- Demonstrated experience with languages such as Python, Java, Golang, etc.
- Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
- Extensive experience in DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps and Infrastructure-as-Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible.
- Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions like Microsoft Azure or Google Cloud.
- Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo, etc.
- Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
- Experience in scripting and building the required automation using Puppet.
- Building and developing CI/CD pipelines.
- Writing, maintaining, reviewing, and documenting modules and Git repositories for Puppet Enterprise on RHEL.
- Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS Clusters / Kubernetes
- Production experience to build scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with Cloud Monitoring tools (Nagios, Cacti, CloudWatch etc.)
- Experience with Docker, Kubernetes, Mesos, NoSQL databases (DynamoDB, Cassandra, MongoDB, etc)
- Other Open Source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge on Linux Environment.
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects.
- Working knowledge of configuration management (Chef, Puppet, or Ansible preferred) and continuous integration tools (Jenkins preferred)
- Experience in handling large production deployments and infrastructure.
- DevOps based infrastructure and application deployments experience.
- Working knowledge of the AWS network architecture including designing VPN solutions between regions and subnets
- Hands-on knowledge of the AWS AMI architecture, including the development of machine templates and blueprints (see the sketch after this list)
- Able to validate that the environment meets all security and compliance controls.
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, Cost Management Platform.
- Proven written and verbal communication skills.
- Understands and can serve as the technical team lead to oversee the build of the Cloud environment based on customer requirements.
- Previous NOC experience.
- Client Facing Experience with excellent Customer Communication and Documentation Skills
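For illustration only, a minimal Python/boto3 sketch of baking an AMI "machine template" from a configured EC2 instance. The instance ID, name, and region are placeholders; a production template pipeline would typically use a dedicated tool such as Packer.

```python
# Minimal sketch: create an AMI from a running instance with boto3.
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="ap-south-1")

def bake_ami(instance_id: str, base_name: str) -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    response = ec2.create_image(
        InstanceId=instance_id,
        Name=f"{base_name}-{stamp}",
        Description="Golden image baked from a configured instance",
        NoReboot=True,                 # avoid rebooting the source instance
    )
    return response["ImageId"]

# ami_id = bake_ami("i-0123456789abcdef0", "web-golden")  # hypothetical IDs
```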
● Responsible for the development and implementation of cloud solutions.
● Responsible for automation and orchestration using tools such as Puppet/Chef.
● Monitoring the product's security and health (Datadog/New Relic).
● Managing and maintaining databases (MongoDB and PostgreSQL).
● Automating infrastructure using AWS services like CloudFormation.
● Providing evidence for infrastructure security audits.
● Migrating to container technologies (Docker/Kubernetes).
● Should have knowledge of serverless concepts (AWS Lambda); see the sketch after this list.
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
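As a purely illustrative sketch of the serverless concept mentioned above (not Aviso's actual API): a minimal AWS Lambda handler in Python using the standard API Gateway proxy event/response shape. The request fields are hypothetical.

```python
# Minimal sketch: an AWS Lambda handler that parses a JSON request body
# and returns an HTTP-style JSON response.
import json

def lambda_handler(event, context):
    # API Gateway proxy events carry the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")          # hypothetical request field
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```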
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rupees 2,500 will be credited to your Sodexo meal card
● Responsible for the design, development, and implementation of cloud solutions.
● Responsible for automation and orchestration using tools such as Puppet/Chef.
● Monitoring the product's security and health (Datadog/New Relic).
● Managing and maintaining databases (MongoDB and PostgreSQL).
● Automating infrastructure using AWS services like CloudFormation.
● Participating in infrastructure security audits.
● Should be able to work on serverless concepts (AWS Lambda).
● Migrating to container technologies (Docker/Kubernetes).
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rupees 2,500 will be credited to your Sodexo meal card
• Work with the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, and scalability, and ensuring maximum availability of server infrastructure.
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra (see the sketch after this list).
• Troubleshooting multiple deployment servers, software installation, licensing management, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure, and test computer hardware, networking software, and operating system software.
• Recommend changes to improve systems and network configurations, and determine the hardware or software requirements related to such changes.
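For illustration only, a minimal Python sketch of a routine health check against one of the distributed data stores listed above (MongoDB, via the pymongo driver). The host, port, and timeout are assumptions; Elasticsearch or Cassandra would get similar lightweight probes.

```python
# Minimal sketch: verify a MongoDB server is reachable with a cheap "ping".
from pymongo import MongoClient
from pymongo.errors import PyMongoError

def mongo_is_up(host: str = "localhost", port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=2000)
    try:
        client.admin.command("ping")   # cheap round-trip to the server
        return True
    except PyMongoError:
        return False

if __name__ == "__main__":
    print("mongodb reachable:", mongo_is_up())
```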