Ansible Jobs in Pune
Requirement
- 1 to 7 years of experience, with relevant experience in managing development operations
- Hands-on experience with AWS
- Thorough knowledge of setting up release pipelines, and managing multiple environments like Beta, Staging, UAT, and Production
- Thorough knowledge of best cloud practices and architecture
- Hands-on with benchmarking and performance monitoring
- Identifying various bottlenecks and taking pre-emptive measures to avoid downtime
- Hands-on knowledge of at least one configuration-management toolset: Chef, Puppet, or Ansible
- Hands-on experience with CloudFormation, Terraform, or other Infrastructure as Code tools is a plus
- Thorough experience with shell scripting; should not shy away from learning new technologies or programming languages
- Experience with other cloud providers like Azure and GCP is a plus
- Should be open to R&D for creative ways to improve performance while keeping costs low
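The benchmarking and bottleneck-hunting bullets above usually come down to latency percentiles. As a minimal sketch (the nearest-rank method is just one convention; real monitoring stacks use their own interpolation rules):

```python
import math

def percentile(samples, pct):
    """pct-th percentile of samples using the nearest-rank method."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def latency_report(samples_ms):
    """Summarise a list of latency samples (milliseconds)."""
    return {
        "p50": percentile(samples_ms, 50),
        "p95": percentile(samples_ms, 95),
        "p99": percentile(samples_ms, 99),
        "max": max(samples_ms),
    }

if __name__ == "__main__":
    samples = list(range(1, 101))  # 1..100 ms
    print(latency_report(samples))  # p50=50, p95=95, p99=99, max=100
```

Tail percentiles (p95/p99), not averages, are what reveal the bottlenecks that cause user-visible slowness.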
What do we want the person to do?
- Manage, monitor, and provision infrastructure, primarily on AWS
- Will be responsible for maintaining 100% uptime on production servers (Site Reliability)
- Setting up a release pipeline for current releases. Automating releases for Beta, Staging & Production
- Maintaining near-production replica environments on Beta and Staging
- Automating Releases and Versioning of Static Assets (Experience with Chef/Puppet/Ansible)
- Should have hands-on experience with Build Tools like Jenkins, GitHub Actions, AWS CodeBuild etc
- Identify performance gaps and ways to fix them.
- Weekly meetings with the Engineering Team to discuss changes/upgrades, which can be related to code issues or architecture bottlenecks
- Creative Ways of Reducing Costs of Cloud Computing
- Convert Infrastructure Deployment / Provision to Infrastructure as Code for reusability and scaling.
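The "versioning of static assets" responsibility above is typically implemented with content-hash fingerprinting, so browsers cache-bust automatically on release. A minimal sketch (file names are illustrative; a real release play would walk the build directory):

```python
# Content-hash versioning for static assets: app.js -> app.<hash>.js.
import hashlib
import os

def fingerprint_name(path, content: bytes, digest_len=8):
    """Return path rewritten as name.<hash>.ext for cache-busting."""
    digest = hashlib.sha256(content).hexdigest()[:digest_len]
    root, ext = os.path.splitext(path)
    return f"{root}.{digest}{ext}"

if __name__ == "__main__":
    print(fingerprint_name("static/app.js", b"console.log('hi')"))
```

Because the name is derived from the content, unchanged assets keep their names across releases and stay cached, while changed assets get fresh URLs.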
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and Monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing and managing operational support
- Design, Develop and Troubleshoot Automation scripts (Configuration/Infrastructure as code or others) for Managing Infrastructure.
- Expert at one of the scripting languages – Python, shell, etc
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database upgrade-related issues
- Good Understanding and experience in working with Containerized environments like Kubernetes and Datastores like Cassandra, Elasticsearch, MongoDB, etc
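Several of the bullets above center on operating and monitoring services. The core of a monitoring probe is a health check with exponential backoff; a minimal sketch (the prober is injected so the example runs without a live endpoint; in practice it would be an HTTP GET against the service):

```python
import time

def wait_healthy(probe, attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call probe() until it returns True, backing off exponentially.

    Returns the number of attempts used, or raises RuntimeError."""
    for attempt in range(1, attempts + 1):
        if probe():
            return attempt
        if attempt < attempts:
            sleep(base_delay * (2 ** (attempt - 1)))
    raise RuntimeError("service did not become healthy")

if __name__ == "__main__":
    state = {"calls": 0}
    def flaky():
        state["calls"] += 1
        return state["calls"] >= 3  # healthy on the third probe
    print(wait_healthy(flaky, sleep=lambda s: None))  # prints 3
```

Exponential backoff keeps a flapping service from being hammered by its own health checks during recovery.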
A top of the line, premium software advisory & development services firm. Our customers include promising early stage start ups, fortune 500 enterprises and investors. We draw inspiration from Leonardo Da Vinci's famous quote - Simplicity is the ultimate sophistication.
Domains we work in:
Multiple; publishing, retail, banking, networking, social sector, education and many more.
Tech we use
Java, Scala, Golang, Elixir, Python, RoR, .Net, JS frameworks
More details on tech:
You name it and we might be working on it. The important thing here is not technology but the kind of solutions we provide to our clients. We believe that to solve some of the most complex problems, holistic thinking and solution design are of extreme importance. Technology is the most important tool for implementing the solution thus designed.
Who should join us:
We are looking for curious & inquisitive technology practitioners. Our customers see us as one of the most premium advisory and development services firms, hence most of the problems we work on are complex and often hard to solve. You can expect to work in small (2-5 person) teams, working very closely with customers in iteratively developing and evolving the solution. We are continually on the search for passionate, bright and energetic professionals to join our team.
So, if you are someone who has strong fundamentals on technology and wants to stretch, beyond the regular role based boundaries, then Sahaj is the place for you. You will experience a world, where there are no roles or grades and you will play different roles and wear multiple hats, to deliver a software project.
- Work on complex, custom-designed, scalable, multi-tiered software development projects
- Work closely with clients (commercial & social enterprises, start-ups), both Business and Technical staff members
- Be responsible for the quality of software and resolving any issues regarding the solution
- Think through hard problems, not limited to technology and work with a team to realise and implement solutions
- Learn something new every day
- Development and delivery experience in any of the programming languages
- Passion for software engineering and craftsman-like coding prowess
- Great design and solutioning skills (OO & Functional)
- Experience including analysis, design, coding and implementation of large-scale custom-built object-oriented applications
- Understanding of code refactoring and optimisation issues
- Understanding of Virtualisation & DevOps. Experience with Ansible, Chef, and Docker preferable
- Ability to learn new technologies and adapt to different situations
- Ability to handle ambiguity on a day-to-day basis
Skills:- J2EE, Spring Boot, Hibernate (Java), Java and Scala
Position: Site Reliability Engineer
Location: Pune (currently WFH; post-pandemic, you will need to relocate)
About the Organization:
A funded product development company, headquartered in Singapore with offices in Australia, the United States, Germany, the United Kingdom, and India. You will gain work experience in a global environment.
Job Description:
We are looking for an experienced DevOps / Site Reliability engineer to join our team and be instrumental in taking our products to the next level.
In this role, you will be working on bleeding-edge hybrid cloud / on-premise infrastructure handling billions of events and terabytes of data a day.
You will be responsible for working closely with various engineering teams to design, build and maintain a globally distributed infrastructure footprint.
As part of this role, you will be responsible for researching new technologies, managing a large fleet of active services and their underlying servers, automating the deployment, monitoring, and scaling of components, and optimizing the infrastructure for cost and performance.
Day-to-day responsibilities
- Ensure the operational integrity of the global infrastructure
- Design repeatable continuous integration and delivery systems
- Test and measure new methods, applications and frameworks
- Analyze and leverage various AWS-native functionality
- Support and build out an on-premise data center footprint
- Provide support and diagnose issues to other teams related to our infrastructure
- Participate in 24/7 on-call rotation (If Required)
- Expert-level administrator of Linux-based systems
- Experience managing distributed data platforms (Kafka, Spark, Cassandra, etc.); Aerospike experience is a plus
- Experience with production deployments of Kubernetes Cluster
- Experience in automating provisioning and managing Hybrid-Cloud infrastructure (AWS, GCP and On-Prem) at scale.
- Knowledge of monitoring platform (Prometheus, Grafana, Graphite).
- Experience in Distributed storage systems such as Ceph or GlusterFS.
- Experience in virtualisation with KVM, oVirt, and OpenStack
- Hands-on experience with configuration management systems such as Terraform and Ansible
- Bash and Python Scripting Expertise
- Network troubleshooting experience (TCP, DNS, IPv6 and tcpdump)
- Experience with continuous delivery systems (Jenkins, Gitlab, BitBucket, Docker)
- Experience managing hundreds to thousands of servers globally
- Enjoy automating tasks, rather than repeating them
- Capable of estimating costs of various approaches, and finding simple and inexpensive solutions to complex problems
- Strong verbal and written communication skills
- Ability to adapt to a rapidly changing environment
- Comfortable collaborating and supporting a diverse team of engineers
- Ability to troubleshoot problems in complex systems
- Flexible working hours and ability to participate in 24/7 on call support with other team members whenever required.
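One bullet above asks for the ability to estimate costs of various approaches. The arithmetic is simple but worth making explicit; a minimal sketch comparing two fleet shapes (the hourly rates below are made-up placeholders, not real cloud prices):

```python
def monthly_cost(hourly_rate, count, hours=730):
    """730 ~= average hours in a month."""
    return round(hourly_rate * count * hours, 2)

def cheapest(options):
    """options: {name: (hourly_rate, count)} -> (name, monthly_cost)."""
    costs = {name: monthly_cost(rate, n) for name, (rate, n) in options.items()}
    name = min(costs, key=costs.get)
    return name, costs[name]

if __name__ == "__main__":
    plans = {
        "few-large":  (0.40, 2),   # 2 big instances
        "many-small": (0.05, 12),  # 12 small instances
    }
    print(cheapest(plans))  # -> ('many-small', 438.0)
```

Putting the comparison in code makes it easy to re-run whenever instance counts or rates change.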
- Develop backend services in Go Language
- Write code to handle the scale of thousands of requests per second
- Following best practices of XP/Agile like TDD, SOLID principles, pair programming, etc.
- Deal with cloud-native services and Debug issues on a live setup
- 3 to 8 years Development and delivery experience with Go
- Good knowledge of RESTful API web services and good experience with API Frameworks
- Familiarity with code versioning tools (Git)
- Experience writing scalable solutions and having knowledge on big data tools like Kafka
- Hands-on experience in analysis, design, coding, and implementation of complex, custom-built applications.
- Great Object-Oriented and functional programming skills, including strong design pattern knowledge.
- Familiarity with different databases, like PostgreSQL, MongoDB, Neo4j, etc
- Strong communication and client-facing skills.
- Effectively work and collaborate with teams across different time zones.
- Experience with Microservices, Microservice Principles (Service discovery, API gateways etc) knowledge is a bonus.
- Familiarity with at least one cloud service (Heroku/AWS/GCP/Azure) will be good to have.
- Experience with API security standards and implementation (OAuth).
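Handling thousands of requests per second, as this role requires, usually involves rate limiting somewhere in the stack. A token-bucket limiter is one common shape; sketched here in Python for brevity (the role itself is Go-focused), with an injected clock so the behaviour is deterministic:

```python
class TokenBucket:
    def __init__(self, rate, capacity, clock):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Consume one token if available; refill based on elapsed time."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

if __name__ == "__main__":
    t = [0.0]
    bucket = TokenBucket(rate=1, capacity=2, clock=lambda: t[0])
    print([bucket.allow() for _ in range(3)])  # -> [True, True, False]
    t[0] = 1.0  # one second later: one token refilled
    print(bucket.allow())                      # -> True
```

The bucket permits short bursts up to `capacity` while enforcing the average `rate` over time.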
Regards
Amir Azad
Talent Acquisition
Job Description
DevOps – Technical Skills
- Excellent understanding of at least one of the programming languages Ruby, Python, or Java
- Good understanding and hands-on experience in Shell/Bash/YAML scripting
- Experience with CI/CD Pipelines and automation processes across different tech stacks and cloud providers (AWS / Azure)
- Experience in Maven and Git workflows
- Hands-on experience working with container orchestration tools such as Docker and Kubernetes
- Good knowledge of any DevOps automation tools like Chef, Ansible, Terraform, Puppet, Fabric, etc.
- Experience managing stakeholders and external interfaces and setting up tools and required infrastructure
- Hands on experience in any Cloud infrastructure like AWS or Azure.
- Strong knowledge and hands on experience in Unix OS
- Able to Identify improvements within existing processes to reduce technical debt.
- Experience in network, server, application status monitoring and troubleshooting.
- Possess good problem solving and debugging skills. Troubleshoot issues and coordinate with development teams to streamline builds.
- Exposure to DevSecOps and Agile principles is a plus
- Good communication and interpersonal skills
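The CI/CD pipeline experience asked for above boils down to ordering jobs by their dependencies. A toy dependency resolver makes the idea concrete (real engines like Jenkins or GitLab CI derive this from the pipeline definition; the job names here are invented):

```python
def stage_order(deps):
    """deps: {job: [jobs it depends on]} -> list in a runnable order."""
    order, seen = [], set()
    def visit(job, path=()):
        if job in seen:
            return
        if job in path:
            raise ValueError(f"cycle involving {job}")
        for dep in deps.get(job, []):
            visit(dep, path + (job,))
        seen.add(job)
        order.append(job)
    for job in deps:
        visit(job)
    return order

if __name__ == "__main__":
    pipeline = {
        "build": [],
        "test": ["build"],
        "package": ["test"],
        "deploy": ["package", "test"],
    }
    print(stage_order(pipeline))  # -> ['build', 'test', 'package', 'deploy']
```

This is a depth-first topological sort; the cycle check is what turns a misconfigured pipeline into an immediate error instead of a hang.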
Experience- 5 to 7 Yrs
Designation- Assistant Manager
Location- Pune, Bangalore, Mumbai
Notice Period- Immediate to 1 month
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues in different systems; you will also be involved in building new features for the next generation of cloud recovery services and managed services.
· You will directly guide the technical strategy for our clients and build out a new capability within the company for DevOps to improve our business relevance for customers.
· You will be coordinating with the Cloud and Data teams for their requirements, verify the configurations required for each production server, and come up with scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, and the packaging and deployment of applications.
To be the right fit, you'll need:
· Expert in Cloud Services like AWS.
· Experience in Terraform Scripting.
· Experience in container technology like Docker and orchestration like Kubernetes.
· Good knowledge of frameworks such as Jenkins, CI/CD pipeline, Bamboo Etc.
· Experience with various version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible)
He/she must demonstrate a high level of ownership, integrity, and leadership skills, and be flexible and adaptive with a strong desire to learn and excel.
Required Skills:
- Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
- Excellent knowledge of AWS offerings around Cloud and DevOps
- Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
- Should be familiar with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
- Strong experience in Python, Shell Scripting, Ansible, and Terraform
- Good command over monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc.
- Experience with Linux/Unix systems administration.
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and 'fixes'
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
Daily and Monthly Responsibilities :
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Skills and Qualifications :
- Bachelors in Computer Science, Engineering or relevant field
- Experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good knowledge of Python
- Working knowledge of databases such as MySQL and Postgres, and of SQL
- Problem solving attitude
- Collaborative team spirit
- Detailed knowledge of Linux systems (Ubuntu)
- Proficient in AWS console and should have handled the infrastructure of any product (Including dev and prod environments)
Mandatory hands on experience in the following :
- Python based application deployment and maintenance
- NGINX web server
- AWS modules EC2, VPC, EBS, S3
- IAM setup
- Database configurations MySQL, PostgreSQL
- Linux flavoured OS
- Instance/Disaster management
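The IAM setup item above is often handled as "policy as code": building the policy document as data before applying it. A minimal sketch (the bucket name is a placeholder; the JSON shape follows AWS's documented policy grammar):

```python
import json

def s3_read_only_policy(bucket):
    """Least-privilege read-only IAM policy for one S3 bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",    # objects within (GetObject)
            ],
        }],
    }

if __name__ == "__main__":
    print(json.dumps(s3_read_only_policy("example-assets"), indent=2))
```

Generating the document from a function keeps per-environment policies consistent and reviewable in version control.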
What's the role?
- Work closely with our Development and IT Ops teams to build this infrastructure using the most effective tools and methodologies including a high degree of automation
- Manage the transition between product planning and product deployment adhering to DevOps best practices
- Requirement gathering for cloud infrastructure requirements, evaluate different options and provide solutions
- Working on ways to automate and improve development and release processes
- Deliver insights from production scale data to add SRE values to the team and the product
Responsibilities
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Deploy applications to the cloud, leverage appropriate technology stack for high-availability, fault tolerance and auto-scaling
- Facilitate the development and implementation of DevOps best practices
- Provide operational support to product development teams
- Design, develop and support CI/CD pipelines for our applications
- Contribute new DevOps ideas/methodologies, demonstrate a unique and informed viewpoint about its working implementation
- Performance analysis and capacity planning for optimised infrastructure
- Developing/Documenting processes and monitoring performance metrics of the product ecosystem
- Understanding of Release Management and applying SRE (Site Reliability Engineering) practices for overall system
- Implement and maintain monitors, alarms, and notifications for failure or downtime scenarios
- Balance feature development speed and reliability with well-defined service level objectives
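Balancing release speed against well-defined service level objectives, as the last bullet puts it, rests on error-budget arithmetic: a 99.9% availability target over 30 days leaves roughly 43 minutes of allowed downtime. A minimal sketch:

```python
def error_budget_minutes(slo_pct, window_days=30):
    """Allowed downtime (minutes) for an availability SLO over a window."""
    window_minutes = window_days * 24 * 60
    return round((1 - slo_pct / 100) * window_minutes, 1)

if __name__ == "__main__":
    for slo in (99.0, 99.9, 99.99):
        print(slo, "->", error_budget_minutes(slo), "min/month")
```

When the budget is nearly spent, the SRE practice is to slow releases and invest in reliability; when plenty remains, the team can ship faster.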
Essential Skills / Experience
- Required
- Extensive experience in DevOps Engineering, SRE, team management, and collaboration
- Experience with Advanced Kubernetes Concepts, Docker, Terraform, Ansible, and Kops
- Exposure to AWS cloud technologies and Experience with Linux systems
- Expertise in bash/python scripting languages
- Advanced Knowledge of SRE concepts, related best practice implementations, and SRE tools
- Preferred
- Knowledge and experience about SaaS product deployment strategies
- Experience with JAVA based SaaS products
Qualification (Required)
- B.E./M.E./MCA degree in Computer Science or an equivalent degree from a reputed college/university
Cloud native technologies - Kubernetes (EKS, GKE, AKS), AWS ECS, Helm, CircleCI, Harness, Serverless platforms (AWS Fargate etc.)
Infrastructure as Code tools - Terraform, CloudFormation, Ansible
Scripting - Python, Bash
Desired Skills & Experience:
Projects/Internships with coding experience in either of Javascript, Python, Golang, Java etc.
Hands-on scripting and software development fluency in any programming language (Python, Go, Node, Ruby).
Basic understanding of Computer Science fundamentals - Networking, Web Architecture etc.
Infrastructure automation experience with knowledge of at least a few of these tools: Chef, Puppet, Ansible, CloudFormation, Terraform, Packer, Jenkins etc.
Bonus points if you have contributed to open source projects, participated in competitive coding platforms like Hackerearth, CodeForces, SPOJ etc.
You’re willing to learn various new technologies and concepts. The “cloud-native” field of software is evolving fast and you’ll need to quickly learn new technologies as required.
Communication: You like discussing a plan upfront, welcome collaboration, and are an excellent verbal and written communicator.
B.E/B.Tech/M.Tech or equivalent experience.
Experience: 6+ years
Location: Pune
Lead/support implementation of core cloud components and document critical design and configuration details to support enterprise cloud initiatives. The Cloud Infrastructure Lead will be primarily responsible for utilizing technical skills to coordinate enhancements and deployment efforts and to provide insight and recommendations for implementing client solutions. Leads will work closely with Customer, Cloud Architect, other Cloud teams and other functions.
Job Requirements:
- Experience in Cloud Foundation setup from the hierarchy of Organization to individual services
- Experience in Cloud Virtual Network, VPN Gateways, Tunneling, Cloud Load Balancing, Cloud Interconnect, Cloud DNS
- Experience working with scalable networking technologies such as Load Balancers/Firewalls and web standards (REST APIs, web security mechanisms)
- Experience working with Identity and access management (MFA, SSO, AD Connect, App Registrations, Service Principals)
- Familiarity with standard IT security practices such as encryption, certificates and key management.
- Experience in Deploying and maintaining Applications on Kubernetes
- Must have worked on one or more configuration tools such as Terraform, Ansible, PowerShell DSC
- Experience on Cloud Storage services including MS SQL DB, Tables, Files etc
- Experience on Cloud Monitoring and Alerts mechanism
- Well versed with Governance and Cloud best practices for Security and Cost optimization
- Experience in one or more of the following: Shell scripting, PowerShell, Python or Ruby.
- Experience with Unix/Linux operating systems internals and administration (e.g., filesystems, system calls) or networking (e.g., TCP/IP, routing, network topologies and hardware, SDN)
- Should have strong knowledge of Cloud billing and understand costing of different cloud services
- Prior professional experience in IT Strategy, IT Business Management, Cloud & Infrastructure, or Systems Engineering
Preferred
- Compute: Infrastructure, Platform Sizing, Consolidation, Tiered and Virtualized Storage, Automated Provisioning, Rationalization, Infrastructure Cost Reduction, Thin Provisioning
- Experience with Operating systems and Software
- Sound background with Networking and Security
- Experience with Open Source: Sizing and Performance Analyses, Selection and Implementation, Platform Design and Selection
- Experience with Infrastructure-Based Processes: Monitoring, Capacity Planning, Facilities Management, Performance Tuning, Asset Management, Disaster Recovery, Data Center support
About Us!
A global Leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETLs like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with other capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
CI/CD tools (Jenkins/Bamboo/TeamCity/CircleCI), DevSecOps pipeline, Cloud Services (AWS/Azure/GCP), Ansible, Terraform, Docker, Helm, CloudFormation templates, web server deployment & config, databases (SQL/NoSQL) deployment & config, Git, Artifactory, monitoring tools (Nagios, Grafana, Prometheus, etc.), application logs (ELK/EFK, Splunk, etc.), API Gateways, security tools, Vault.
Job Brief:
We are looking for candidates who have experience in development and have delivered CI/CD-based projects. Should have good hands-on experience with Jenkins Master-Slave architecture and have used AWS native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. Should have experience in setting up cross-platform CI/CD pipelines that span different cloud platforms, or on-premise and cloud platforms.
Job Location:
Pune.
Job Description:
- Hands on with AWS (Amazon Web Services) Cloud with DevOps services and CloudFormation.
- Experience interacting with customer.
- Excellent communication.
- Hands-on in creating and managing Jenkins job, Groovy scripting.
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
Minimum 4 years of experience
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem solving skills, Good verbal and written communication skills
- Good knowledge of Linux environment: RedHat etc.
- Good in shell scripting
- Good to have Cloud Technology : AWS, GCP and Azure
Job Description:
This is for a product-based organisation in Pune.
If you are looking for a good opportunity in Cloud Development/DevOps, here it is.
EXP: 4-10 YRs
Location:Pune
Job Type: Permanent
Minimum qualifications:
- Education: Bachelor's or Master's degree
- Proficient in the English language
Relevant experience:
- Should have been working for at least four years as a DevOps/Cloud Engineer
- Should have worked on AWS Cloud Environment in depth
- Should have been working in an Infrastructure as code environment or understands it very clearly.
- Has done infrastructure coding using CloudFormation/Terraform, configuration management using Chef/Ansible, and an Enterprise Bus (RabbitMQ/Kafka)
- Deep understanding of microservice design; aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper)
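The centralized-caching bullet refers to stores like Redis; the pattern itself is a read-through cache with TTL expiry. An in-process sketch of that pattern (the clock is injected so expiry can be shown without real waiting; the key and loader are invented):

```python
class TTLCache:
    def __init__(self, ttl_seconds, clock):
        self.ttl = ttl_seconds
        self.clock = clock
        self._data = {}  # key -> (value, expires_at)

    def get(self, key, loader):
        """Return the cached value, or call loader() and cache the result."""
        now = self.clock()
        hit = self._data.get(key)
        if hit and hit[1] > now:
            return hit[0]
        value = loader()
        self._data[key] = (value, now + self.ttl)
        return value

if __name__ == "__main__":
    t = [0.0]
    loads = {"n": 0}
    def load_user():
        loads["n"] += 1
        return {"id": 42}
    cache = TTLCache(ttl_seconds=60, clock=lambda: t[0])
    cache.get("user:42", load_user)
    cache.get("user:42", load_user)   # served from cache
    print(loads["n"])                 # -> 1
    t[0] = 61.0                       # past the TTL
    cache.get("user:42", load_user)   # reloaded
    print(loads["n"])                 # -> 2
```

A centralized store like Redis provides the same semantics shared across service instances, which an in-process dict cannot.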
A USA-based product engineering company in the medical industry.
Total Experience: 6 – 12 Years
Required Skills and Experience
- 3+ years of relevant experience with DevOps tools Jenkins, Ansible, Chef etc
- 3+ years of experience in continuous integration/deployment and software tools development experience with Python and shell scripts etc
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience in cloud orchestration tools such as AWS Cloudformation/Terraform etc
- Experience with implementing "infrastructure as code", "pipeline as code", and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts and methodologies, such as SSH, public-key encryption, access credentials, certificates, etc., and the ability to apply them.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
Responsibilities
- Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
- Hands-on experience with Unix, Python, and shell scripting is a must.
- Hands-on experience creating infrastructure on AWS is a must.
- Must have experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
- Must be good at these DevOps tools:
Version Control Tools: Git, CVS
Build Tools: Maven and Gradle
CI Tools: Jenkins
- Hands-on experience with analytics tools such as the ELK stack.
- Knowledge of Java will be an advantage.
- Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.
- Ability to help debug and optimise code and automate routine tasks.
- Should have excellent communication skills
- Experience in dealing with difficult situations and making decisions with a sense of urgency.
- Experience with Agile and Jira is a plus
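Automating routine release chores, as the requirements above mention, often starts with small scripts; a minimal sketch in Python (the version-bumping helper is a hypothetical example, not something this posting specifies):

```python
# Hypothetical release-automation helper: bump a semantic version
# string (MAJOR.MINOR.PATCH) following SemVer conventions.
def bump_version(version: str, part: str) -> str:
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

if __name__ == "__main__":
    print(bump_version("1.4.9", "minor"))  # 1.5.0
```

A script like this would typically be wired into the CI pipeline so that tagging and versioning need no manual effort.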
We are looking for a DevOps Architect who will be a part of a diverse software development team. The team will be responsible for building and maintaining multiple environments. He/she will bring cloud management and sysadmin skills for application deployment and management while assisting in the automation of the end-to-end deployment process. He/she should be passionate about trying out newer technologies and software methodologies. At the same time, he/she needs to have broad experience with build and deployment software and a level-headed approach to problem-solving.
Responsibilities:
• Execution of the automation architecture, covering service automation and the automation of application builds, monitoring services, and databases
• Supporting various metrics and reporting requirements
• Implementing security features across automation pipeline including SSL certificates
• Maintaining access controls across all environments along with auditing any access breaches
• Providing day-to-day work allocation along with strong leadership and team management
• Build and maintain infrastructure that utilizes public as well as on-prem cloud
• Collaborate with customers on the design of the end to end solution
• Lead deployment of projects and act as a point of escalation to help resolve any issues
• Act as a technical liaison between customers, engineers, and support
• Coach and mentor team members and guide on technical challenges
• Maintain technical skills and knowledge, keeping up to date with market trends and competitive insights.
Requirements:
• Good knowledge of AWS/Azure/Google Cloud IaaS (AMI, Pricing Model, VPC, Subnets, etc.) and AWS/Azure/Google Cloud Security best practices
• Strong working knowledge of infrastructure automation tools such as Ansible, Terraform, and Chef
• Manage the CI/CD Pipeline and help with release automation and deployment
• Experience with Docker and Kubernetes is mandatory, Docker Swarm/Container clustering will be a plus
• Write software and scripts to automate tasks, gather metrics
• Solid understanding of a Linux distribution, including advanced troubleshooting skills
• Knowledge of tools like Jenkins/TravisCI/Bamboo/GitLab CI is required
• Knowledge of SCM tools like Git/GitHub/Bitbucket/GitLab is required
• In-depth knowledge of databases such as MySQL, MongoDB, Elasticsearch, etc.
• Experience using monitoring tools like Prometheus, Grafana, CloudWatch, etc.
• Should have knowledge of writing automation scripts in Shell/Python/Perl
• Good understanding of RESTful web services.
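Writing scripts to gather metrics, as the requirements above describe, can be as simple as computing latency percentiles from raw samples; a sketch (the sample data and the nearest-rank percentile method are assumptions for illustration):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

if __name__ == "__main__":
    # Hypothetical response times in milliseconds
    latencies_ms = [120, 85, 230, 95, 310, 150, 99, 420, 180, 105]
    print("p50:", percentile(latencies_ms, 50))  # p50: 120
    print("p95:", percentile(latencies_ms, 95))  # p95: 420
```

In practice the samples would come from logs or an agent, and the results would be pushed to a system like CloudWatch or Prometheus for alerting.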
Good to have:
• Passion for writing great, simple, clean, and efficient code
• Should be a fast learner and have excellent problem-solving capabilities
• Should have excellent written and verbal communication skills
• Experience in working with large-scale distributed systems is a plus
• Should be able to independently design and build components for the automation platform
• Should assist in the maintenance of the tools and troubleshooting the issues
Why should you join Opcito?
We are a dynamic start-up that believes in designing transformation solutions for our customers, with an ability to unify quality, reliability, and cost-effectiveness at any scale. Our core work culture focuses on adding material value to client products by leveraging best practices in DevOps like continuous integration, continuous delivery, and automation, coupled with disruptive technologies like cloud, containers, serverless computing, and microservice-based architectures. Here are some of the perks of working with Opcito:
• Outstanding career development and learning opportunities
• Competitive compensation depending on experience and skill
• Friendly team and enjoyable working environment
• Flexible working schedule
• Corporate and social events.
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD. Find bottlenecks in the software delivery pipeline. Fix bottlenecks with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks enabling them to move fast. (In simple words you'll be a bridge between tech, operations & product)
Skills required:
Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, CloudFormation, SQS, etc.)
- Knowledge of distributed system deployment.
- Deployed and Orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Setup monitoring and alert systems for services using ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with Queue based systems.
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.
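Knowledge of Ansible and Nginx, as in the must-have list above, commonly means writing playbooks like this sketch (the host group and the apt-based package manager are assumptions):

```yaml
# Hypothetical Ansible playbook: install and start Nginx on web hosts
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because each task declares a desired state rather than a command, re-running the playbook is idempotent and safe across the whole host group.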
Good to have:
- Experience dealing with personal information (PII) security.
- Experience conducting internal Audits and assisting External Audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with Private Cloud setup.
Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed system (Kafka, Redis, etc.)
- Ownership attitude is a must.
We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in practicing the ultimate work-life balance and satisfaction. Working hard doesn't mean clocking in extra hours; it means having the zeal to contribute the best of your talents. Our people culture helps us put in place measures and benefits that help you feel confident and happy each and every day. Whether you'd like to skill up, go off the grid, attend your favourite events, or be an epitome of fitness, we have you covered.
- Health Memberships
- Sports Subscriptions
- Entertainment Subscriptions
- Key Conferences and Event Passes
- Learning Stipend
- Team Lunches and Parties
- Travel Reimbursements
- ESOPs
That's what we think would brighten up your personal life, as a gesture of thanks for helping us with your talents.
Join us to be a part of our exciting journey to build one Digital Identity Platform!