Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, and Chef
- 4+ years of experience in continuous integration/deployment and in developing software tooling with Python and shell scripts
- Experience building and running Docker images and deploying them on Amazon ECS (see the sketch after this list)
- Working with AWS services (EC2, S3, ELB, VPC, RDS, CloudWatch, ECS, ECR, EKS)
- Knowledge of and experience with container technologies such as Docker, Kubernetes, and Amazon ECS/EKS
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge of and experience with cloud orchestration tools such as AWS CloudFormation and Terraform
- Experience implementing "infrastructure as code", "pipeline as code", and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
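For illustration, a minimal Python/boto3 sketch of the kind of deployment automation referenced above: it forces an Amazon ECS service to redeploy after a new image has been pushed. The region, cluster, and service names are placeholders, not details from this posting.

```python
# Hypothetical sketch: force an ECS service to pick up the latest task
# definition / image after a build. Names below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def redeploy(cluster: str, service: str) -> str:
    """Trigger a rolling redeployment of an ECS service and return the deployment id."""
    response = ecs.update_service(
        cluster=cluster,
        service=service,
        forceNewDeployment=True,
    )
    # The first entry in "deployments" is the one just started.
    return response["service"]["deployments"][0]["id"]

if __name__ == "__main__":
    print(redeploy("example-cluster", "example-web-service"))
```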
Good to have:
- Strong understanding of security concepts and methodologies and the ability to apply them: SSH, public-key encryption, access credentials, certificates, etc.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects on designing and implementing the technical infrastructure, platform, and tools that support modern best practices and improve the efficiency of our development teams through automation, CI/CD pipelines, ease of access, and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you're not just landing a new career; you become part of a team of Egnyters who are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our engineers are part of the process from design to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and the project you are working on. The ideal candidate should be able to take a complex problem and execute it end to end, mentoring and setting higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build, and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable, and high-performance solutions.
• Take part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge and mentor more junior team members while still learning and gaining new skills yourself.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions (see the sketch after this list).
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
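As a rough illustration of what metric-based monitoring can look like from the application side, here is a minimal Python exporter using the prometheus_client library; the metric name and the sampled value are invented for the example.

```python
# Minimal Prometheus exporter sketch: exposes /metrics on port 8000 so a
# Prometheus server can scrape it. The gauge and its value are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("example_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # serve metrics at http://localhost:8000/metrics
    while True:
        queue_depth.set(random.randint(0, 100))  # stand-in for a real measurement
        time.sleep(15)
```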
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in Cloud migrations projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
Job Description:
We are looking to recruit engineers with a zeal to learn cloud solutions using Amazon Web Services (AWS). We'll prefer an engineer who is passionate about AWS Cloud technology, about helping customers succeed, and about quality, and who truly enjoys what they do. The qualified candidate for the AWS Cloud Engineer position is someone who has a can-do attitude and is an innovative thinker.
- Be hands-on, with responsibility for the installation, configuration, and ongoing management of Linux-based solutions on AWS for our clients.
- Responsible for creating and managing Auto Scaling EC2 instances using VPCs, Elastic Load Balancers, and other services across multiple availability zones to build resilient, scalable, and fail-safe cloud solutions.
- Familiarity with other AWS services such as CloudFront, ALB, EC2, RDS, Route 53, etc. desirable.
- Working knowledge of RDS, DynamoDB, GuardDuty, WAF, and multi-tier architecture.
- Proficient with Git, CI/CD pipelines, AWS DevOps tooling, Bitbucket, and Ansible.
- Proficient with Docker Engine, containers, and Kubernetes.
- Expertise in migrating workloads to AWS from other cloud providers.
- Versatile in problem solving, able to resolve complex issues ranging from OS and application faults to creatively improving solution design.
- Ready to work in rotation on a 24x7 schedule and be available on call at other times, given the critical nature of the role.
- Fault finding, analysis, and logging of information for reporting performance exceptions.
- Deployment, automation, management, and maintenance of AWS cloud-based production systems.
- Ensuring availability, performance, security, and scalability of AWS production systems.
- Management of creation, release, and configuration of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Pre-production acceptance testing for quality assurance.
- Provision of critical system security by leveraging best practices and prolific cloud security solutions.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform (see the sketch after this list).
- Designing, maintenance and management of tools for automation of different operational processes.
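For illustration only, a small boto3 sketch of the metrics-and-monitoring work described in the list above: it creates a CloudWatch alarm on an EC2 instance's CPU utilisation. The instance id, SNS topic ARN, and thresholds are placeholders.

```python
# Hypothetical example: alarm when the average CPU of one EC2 instance exceeds
# 80% for two consecutive 5-minute periods, notifying an SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-alerts"],
)
```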
Desired Candidate Profile
o Customer-oriented personality with good communication skills, able to articulate and communicate very effectively both verbally and in writing.
o Team player who collaborates and shares experience and expertise with the rest of the team.
o Understands database systems such as MSSQL, MongoDB, MySQL, MariaDB, DynamoDB, and RDS.
o Understands web servers such as Apache and Nginx.
o Must be RHEL certified.
o In-depth knowledge of Linux commands and services.
o Proficient enough to manage all internet applications, including FTP, SFTP, Nginx, Apache, MySQL, and PHP.
o Good communication skills.
o At least 3-7 years of experience in AWS and DevOps.
Company Profile:
i2k2 Networks is a trusted name in the IT cloud hosting services industry. We help enterprises with cloud migration, cost optimization, support, and fully managed services, which help them move faster and scale with lower IT costs. i2k2 Networks offers a complete range of cutting-edge solutions that drive Internet-powered business modules. We excel in:
- Managed IT Services
- Dedicated Web Servers Hosting
- Cloud Solutions
- Email Solutions
- Enterprise Services
- Round the clock Technical Support
https://www.i2k2.com/
Regards
Nidhi Kohli
i2k2 Networks Pvt Ltd.
AM - Talent Acquisition
Requirements
Core skills:
● Strong background in Linux/Unix administration and troubleshooting
● Experience with AWS (ideally including some of the following: VPC, Lambda, EC2, ElastiCache, Route53, SNS, CloudWatch, CloudFront, Redshift, OpenSearch, ELK, etc.)
● Experience with infrastructure automation and orchestration tools, including Terraform, Packer, Helm, and Ansible
● Hands-on experience with container technologies like Docker and Kubernetes/EKS, and with GitLab and Jenkins as pipelines
● Experience in one or more of Groovy, Perl, Python, or Go, or scripting experience in Shell
● Good understanding of Continuous Integration (CI) and Continuous Deployment (CD) pipelines using tools like Jenkins, FlexCD, ArgoCD, Spinnaker, etc.
● Working knowledge of key-value stores and database technologies (SQL and NoSQL) such as MongoDB and MySQL
● Experience with application monitoring tools like Prometheus and Grafana, and APM tools like New Relic, Datadog, and Pinpoint
● Good exposure to middleware components like ELK, Redis, and Kafka, and to IoT-based systems including Redis, New Relic, Akamai, Apache/Nginx, ELK, Grafana, Prometheus, etc.
Good to have:
● Prior experience in Logistics, Payments, and IoT-based applications
● Experience with unmanaged MongoDB clusters, their automation and operations, and analytics
● Ability to write procedures for backup and disaster recovery (see the sketch after this list)
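A hedged sketch of what a scripted backup procedure for a self-managed MongoDB node might look like, assuming the mongodump CLI is installed; the connection URI and backup path are placeholders.

```python
# Illustrative backup step: run mongodump into a timestamped directory.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def dump_mongo(uri: str, backup_root: str) -> Path:
    """Dump all databases reachable via `uri` and return the backup directory."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = Path(backup_root) / f"mongo-{stamp}"
    subprocess.run(
        ["mongodump", f"--uri={uri}", f"--out={target}", "--gzip"],
        check=True,  # raise if the dump fails so the job is marked unhealthy
    )
    return target

if __name__ == "__main__":
    print(dump_mongo("mongodb://localhost:27017", "/var/backups"))
```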
Core Experience
● 3-5 years of hands-on DevOps experience
● 2+ years of hands-on Kubernetes experience
● 3+ years of Cloud Platform experience with a special focus on Lambda, R53, SNS, CloudFront, CloudWatch, Elastic Beanstalk, RDS, OpenSearch, EC2, and security tools
● 2+ years of scripting experience in Python/Go and shell
● 2+ years of familiarity with CI/CD, Git, IaC, monitoring, and logging tools
• Expert troubleshooting skills.
• Expertise in designing highly secure cloud services and cloud infrastructure using AWS (EC2, RDS, S3, ECS, Route53).
• Experience with DevOps tools including Docker, Ansible, and Terraform.
• Experience with monitoring tools such as Datadog and Splunk.
• Experience building and maintaining large-scale infrastructure in AWS, including experience leveraging one or more coding languages for automation.
• Experience providing 24x7 on-call production support.
• Understanding of best practices, industry standards, and repeatable, supportable processes.
• Knowledge and working experience of container-based deployments such as Docker, Terraform, and AWS ECS (see the sketch after this list).
• Knowledge and working experience of TCP/IP, DNS, certificates, and networking concepts.
• Knowledge and working experience of the CI/CD development pipeline and of the CI/CD maturity model (Jenkins).
• Strong core Linux OS skills, shell scripting, and Python scripting.
• Working experience of modern engineering operations duties, including providing the necessary tools and infrastructure to support high-performance Dev and QA teams.
• Database (MySQL) administration skills are a plus.
• Prior work on high-load and high-traffic infrastructure is a plus.
• Clear vision of and commitment to providing outstanding customer service.
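As a sketch of the container build-and-push step implied above, here is a short example using the Docker SDK for Python (pip install docker); the registry URL and image tag are assumptions made for the example.

```python
# Hypothetical build-and-push step using the Docker SDK for Python.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="registry.example.com/app:latest")

# Push it and stream the status lines returned by the registry.
for line in client.images.push(
    "registry.example.com/app", tag="latest", stream=True, decode=True
):
    if "status" in line:
        print(line["status"])
```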
Platform Services Engineer
DevSecOps Engineer
- Strong systems experience: Linux, networking, cloud, APIs
- Scripting-language programming: Shell, Python
- Strong debugging capability
- AWS platform: IAM, networking, EC2, Lambda, S3, CloudWatch
- Knowledge of Terraform, Packer, Ansible, Jenkins
- Observability: Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- DevSecOps CI/CD: Jenkins
- Microservices
- Security & Access Management
- Container orchestration a plus: Kubernetes, Docker, etc.
- Big data platforms knowledge (EMR, Databricks, Cloudera) a plus
BlueOptima’s vision is to become the global reference for the optimisation of the performance of Software Engineers across all industries. We provide industry-leading objective metrics in software development. We enable large organisations to deliver better software, faster and at lower cost, with technology that pushes the limits of what has been done before.
We are a global company which has consistently doubled in headcount and revenue YoY, with no external investment. We are currently located in four countries: London (our HQ), Mexico, India, and the US, with 250+ employees (and increasing every day) from 34 different nationalities and over 25 languages spoken.
We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment.
Location: Bangalore
Department: DevOps
Job Summary:
We are looking for skilled and talented engineers to join our Platform team, directly contribute to Continuous Delivery, and improve the state of the art in CI/CD and Observability within BlueOptima.
As a Senior DevOps Engineer, you will define and outline CI/CD-related aspects, collaborate with application teams on training and enforcing CI/CD best practices, and directly implement, maintain, and consult on the observability and monitoring framework that supports the needs of multiple internal stakeholders.
Your team: The Platform team at BlueOptima works across product lines and is responsible for providing a scalable technology platform that the Product teams use to build their applications, improve their performance, and improve the SDLC, for example by improving the application delivery pipeline.
The Platform team is also responsible for driving technology adoption across the product development teams. The team works on components that are common across product lines, such as IAM (Identity & Access Management), Auto Scaling, APM (Application Performance Monitoring), and CI/CD.
Responsibilities and tasks:
- Define and outline CI/CD and related aspects
- Own and improve the state of the build process to reduce manual intervention
- Own and improve the state of deployment to make it 100% automated
- Define guidelines and standards for the automated testing required for a good CI/CD pipeline, and ensure alignment on an ongoing basis (including artifact generation, promotions, etc.)
- Automate deployment to, and rollback in, the production environment
- Collaborate with engineering teams, application developers, management, and infrastructure teams to assess near- and long-term monitoring needs, and provide them with tooling to improve the observability of applications in production
- Keep an eye on emerging observability tools, trends, and methodologies, and continuously enhance our existing systems and processes
- Choose the right set of tools for a given problem and apply them across all available applications
- Collaborate with the application teams on the following:
- Define and enforce logging standards (see the sketch after this list)
- Define the metrics applications should track, and support application teams in visualising them in Grafana (or similar tools)
- Define alerts for application health monitoring in production
- Tooling like APM, E2E, etc.
- Continuously improve the state of the art on the above
- Assist in scheduling and hosting regular tool training sessions to better enable tool adoption and best practices, and make sure training materials are maintained.
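A minimal illustration of the kind of convention a logging standard might mandate, using only the Python standard library; the JSON field names are assumptions made for the example, not an established BlueOptima standard.

```python
# Sketch of a JSON log line format so logs can be parsed by ELK or similar tools.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("payments").info("order processed")
```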
Qualifications
What You Need to Succeed at BlueOptima:
- Minimum bachelor's degree in Computer Science or equivalent
- Demonstrable years of experience with implementation, operations, maintenance of IT systems and/or administration of software functions in multi-platform and multi-system environments.
- At least 1 year of experience leading or mentoring a small team.
- Demonstrable experience having developed containerized application components, using docker or similar solutions in previous roles
- Have extensive experience with metrics and logging libraries and aggregators, data analysis and visualization tools.
- Experience in defining, creating, and supporting monitoring dashboards
- 2+ Years of Experience with CI tools and building pipelines using Jenkins.
- 2+ years of experience with monitoring and observability tools and methodologies for products such as Grafana, Prometheus, Elasticsearch, Splunk, AppDynamics, Dynatrace, Nagios, Graphite, Datadog, etc.
- Ability to write and read simple scripts using Python / Shell Scripts.
- Familiarity with configuration management tools such as Ansible.
- Ability to work autonomously with minimum supervision
- Demonstrate strong oral and written communication skills
Additional information
Why join our team?
Culture and Growth:
- Global team with a creative, innovative and welcoming mindset.
- Rapid career growth and opportunity to be an outstanding and visible contributor to the company's success.
- Freedom to create your own success story in a high-performance environment.
- Training programs and Personal Development Plans for each employee
Benefits:
- 32 days of holidays - this includes public and religious holidays
- Contributions to your Provident Fund which can be matched by the company above the statutory minimum as agreed
- Private Medical Insurance provided by the company
- Gratuity payments
- Claim Mobile/Internet expenses and Professional Development costs
- Leave Travel Allowance
- Flexible Work from Home policy - 2 days home p/w
- International travel opportunities
- Global annual meet-up (recent meetups have been held in Cancun, India, and Thailand, Oct 2022)
- High-quality equipment (ergonomic chairs and 32" screens)
- Pet friendly offices
- Creche Policy for working parents.
- Paternity and Maternity leave.
Stay connected with us on LinkedIn (https://www.linkedin.com/company/blueoptima) or keep an eye on our career page (https://www.blueoptima.com/careers) for future opportunities!
JOB RESPONSIBILITIES:
- Responsible for the design, implementation, and continuous improvement of automated CI/CD infrastructure
- Display technical leadership and oversight of implementation and deployment planning, system integration, ongoing data validation processes, quality assurance, delivery, operations, and sustainability of technical solutions
- Responsible for designing topologies to meet requirements for uptime, availability, scalability, robustness, fault tolerance, and security
- Implement proactive measures for automated detection and resolution of recurring operational issues (see the sketch after this list)
- Lead the operational support team to manage incidents, document root causes, and track preventive measures
- Identify and deploy cybersecurity measures by continuously validating and fixing findings from vulnerability assessment reports and managing risk
- Responsible for the design and development of tools and installation procedures
- Develop and maintain accurate estimates, timelines, project plans, and status reports
- Organize and maintain packaging and deployment of various internal modules and third-party vendor libraries
- Responsible for the employment, timely performance evaluation, counselling, employee development, and discipline of assigned employees
- Participate in calls and meetings with customers, vendors, and internal teams on a regular basis
- Perform infrastructure cost analysis and optimization
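Purely illustrative, a small Python watchdog showing the shape of "automated detection and resolution of recurring operational issues"; the disk-usage condition and the journald cleanup are example choices, not prescribed remediations.

```python
# Detect a recurring condition (filesystem nearly full) and auto-remediate.
import shutil
import subprocess

THRESHOLD = 0.90  # act when the filesystem is more than 90% full

def check_and_remediate(mount: str = "/var/log") -> bool:
    """Return True if remediation was applied."""
    usage = shutil.disk_usage(mount)
    if usage.used / usage.total > THRESHOLD:
        # Example remediation: shrink journald logs to 500 MB.
        subprocess.run(["journalctl", "--vacuum-size=500M"], check=False)
        return True
    return False

if __name__ == "__main__":
    print("remediated" if check_and_remediate() else "healthy")
```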
SKILLS & ABILITIES
Experience: Minimum of 10 years of experience with good technical knowledge regarding build, release, and systems engineering
Technical Skills:
- Experience with DevOps toolchains such as Docker, Rancher, Kubernetes, Bitbucket
- Experience with Apache, Nginx, Tomcat, Prometheus, Grafana
- Ability to learn/use a wide variety of open-source technologies and tools
- Sound understanding of cloud technologies preferably AWS technologies
- Linux, Windows, Scripting, Configuration Management, Build and Release Engineering
- 6 years of experience in DevOps practices, with a good understanding of DevOps and Agile principles
- Good scripting skills (Python/Perl/Ruby/Bash)
- Experience with standard continuous integration tools such as Jenkins/Bitbucket Pipelines
- Experience with software configuration management systems (Puppet/Chef/Salt/Ansible)
- Microsoft Office Suite (Word, Excel, PowerPoint, Visio, Outlook) and other business productivity tools
- Working knowledge of HSM and PKI (good to have)
Location:
- Bangalore
Experience:
- 10 + Years.
- Hands-on experience building database-backed web applications using Python based frameworks
- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments
- Experience building client-side and server-side API-level integrations in Python
- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.
- Experience with NoSQL document stores like the Elastic Stack (Elasticsearch, Logstash, Kibana); see the sketch after this list
- Experience in using and managing Git-based version control systems - Azure DevOps, GitHub, Bitbucket, etc.
- Experience in using project management tools like Jira, Azure DevOps etc.
- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
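A brief sketch of a server-side integration with the Elastic Stack using the official Elasticsearch Python client (8.x-style API); the cluster URL, index name, and document fields are placeholders.

```python
# Hypothetical example: index an application event into Elasticsearch.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(
    index="app-events",
    document={
        "service": "example-api",
        "message": "deployment finished",
        "@timestamp": datetime.now(timezone.utc).isoformat(),
    },
)
```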
Job Description
Please connect with me on LinkedIn or share your resume with Shrashti Jain.
• 8+ years of overall experience, with at least 4+ years of relevant experience. (DevOps experience should make up the larger share of the overall experience.)
• Experience with Kubernetes and other container management solutions
• Should have hands-on experience with, and a good understanding of, DevOps tools and automation frameworks
• Demonstrated hands-on experience with DevOps techniques building continuous integration solutions using Jenkins, Docker, Git, Maven
• Experience with n-tier web application development and experience in J2EE / .Net based frameworks
• Look for ways to improve: Security, Reliability, Diagnostics, and costs
• Knowledge of security, networking, DNS, firewalls, WAF, etc.
• Familiarity with Helm and Terraform for provisioning GKE, and with Bash/shell scripting
• Must be proficient in one or more scripting languages: Unix Shell, Perl, Python
• Knowledge and experience with Linux OS
• Should have working experience with monitoring tools like Datadog, ELK, and/or Splunk, or any other monitoring tools/processes
• Experience working in Agile environments
• Ability to handle multiple competing priorities in a fast-paced environment
• Strong Automation and Problem-solving skills and ability
• Experience of implementing and supporting AWS-based instances and services (e.g. EC2, S3, EBS, ELB, RDS, IAM, Route53, CloudFront, ElastiCache).
• Very strong hands-on experience with automation tools such as Terraform
• Good experience with provisioning tools such as Ansible, Chef
• Experience with CI CD tools such as Jenkins.
• Experience managing production environments.
• Good understanding of security in IT and the cloud
• Good knowledge of TCP/IP
• Good Experience with Linux, networking and generic system operations tools
• Experience with Clojure and/or the JVM
• Understanding of security concepts
• Familiarity with blockchain technology, in particular Tendermint
MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization, and mobile technology. MTX's very own Artificial Intelligence platform, Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, and Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical or functional issues in a complex environment to provide timely resolution, with various applications and platforms that are global.
- Bring experience on Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
- Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our catalogue, then participate in the required development
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
- Experience with the software development process and with tools and languages like SaaS, Python, Java, MongoDB, shell scripting, MySQL, and Git.
- Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
*******************