Company Profile:
Flentas helps startups, SMEs, and enterprises that want to leverage the full potential of the cloud by making their journey to the cloud a successful one. As an organization, Flentas focuses on cloud consulting, IoT, DevOps practice and implementation, cloud governance automation, and load/performance testing and tuning of high-traffic, high-volume cloud applications. Flentas serves clients globally, of all shapes and sizes, with a strong and passionate team of experienced solution architects and technology enthusiasts.

Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. Candidates should have good hands-on experience with Jenkins master-slave architecture and with AWS native services such as CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, and should have experience setting up cross-platform CI/CD pipelines that span different cloud platforms or combine on-premise and cloud environments.

Job Location: Satara Road, Pune.

Job Description:
• Hands-on with AWS (Amazon Web Services) cloud, its DevOps services, and CloudFormation.
• Experience interacting with customers.
• Excellent communication.
• Hands-on in creating and managing Jenkins jobs; Groovy scripting.
• Experience setting up cloud-agnostic and cloud-native CI/CD pipelines.
• Experience with Maven.
• Experience with scripting languages such as Bash, PowerShell, and Python.
• Experience with automation tools such as Terraform, Ansible, Chef, and Puppet.
• Excellent troubleshooting skills.
• Experience with Docker and Kubernetes, including writing Dockerfiles.
• Hands-on with version control systems such as GitHub, GitLab, TFS, Bitbucket, etc.
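The scripting requirement above is the kind of glue code a CI/CD pipeline stage often runs. As a minimal sketch (the input format and tag policy are illustrative assumptions, not Flentas' actual pipeline), a Python helper that a Jenkins or CodeBuild stage might call to turn a `git describe --tags` string into a container image tag could look like this:

```python
# Hypothetical CI helper: map a `git describe --tags` string to an image tag.
# The 'v1.4.2' and 'v1.4.2-5-gabc1234' formats are assumptions for this sketch.

def image_tag(describe: str) -> str:
    """'v1.4.2' -> '1.4.2' (release build);
    'v1.4.2-5-gabc1234' -> '1.4.2-dev.5.abc1234' (untagged commit)."""
    version = describe.lstrip("v")
    parts = version.split("-")
    if len(parts) == 1:          # exactly on a tag: release build
        return parts[0]
    base, commits, sha = parts[0], parts[1], parts[2].lstrip("g")
    return f"{base}-dev.{commits}.{sha}"

print(image_tag("v1.4.2"))             # 1.4.2
print(image_tag("v1.4.2-5-gabc1234"))  # 1.4.2-dev.5.abc1234
```

A pipeline stage would typically feed the tag into `docker build -t`, keeping release and development images distinguishable.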
What you will do
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environments.
• Design and implement monitoring solutions that identify both system bottlenecks and production issues.
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality:
o Automating infrastructure creation.
o Providing easy-to-use solutions to the engineering team.
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices:
o Update our processes and design new processes as needed.
o Establish DevOps engineering team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems.
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers.

What you will bring
• The desire to work in a fast-paced environment.
• 5+ years' experience building, maintaining, and deploying production infrastructure on AWS or other cloud providers.
• Containerization experience with applications deployed on Docker and Kubernetes.
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability.
• Demonstrated knowledge of distributed and scalable systems.
• Experience maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.).
• Strong knowledge of DevOps and CI/CD pipelines (GitHub, Bitbucket, Artifactory, etc.).
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures, designs, and recommendations:
o AWS services such as S3, CloudFront, Kubernetes, RDS, and data warehouses, to come up with architectures/suggestions for new use cases.
• Test our system integrity, implemented designs, application developments, and other processes related to infrastructure, making improvements as needed.

Good to have
• Experience with code quality tools, static or dynamic code analysis, and compliance, including undertaking and resolving issues identified from vulnerability and compliance scans of our infrastructure.
• Good knowledge of REST/SOAP/JSON web service API implementation.
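The monitoring responsibility above boils down to turning raw measurements into alerts. A minimal, self-contained sketch of the idea (window size and threshold are illustrative assumptions, not values from this posting): flag a bottleneck when the moving average of recent request latencies crosses a threshold.

```python
# Sketch of a bottleneck detector: alert when the moving average of the last
# `window` latency samples exceeds a threshold. Numbers here are illustrative.
from collections import deque

class LatencyMonitor:
    def __init__(self, window: int = 5, threshold_ms: float = 200.0):
        self.samples = deque(maxlen=window)   # keeps only the last `window` samples
        self.threshold_ms = threshold_ms

    def record(self, latency_ms: float) -> bool:
        """Record one sample; return True if the moving average breaches the threshold."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_ms

mon = LatencyMonitor(window=3, threshold_ms=100.0)
print(mon.record(50))    # False
print(mon.record(90))    # False
print(mon.record(300))   # True (average is now ~146.7 ms)
```

In production this logic usually lives inside a metrics stack (Prometheus alerting rules, CloudWatch alarms) rather than hand-rolled code, but the windowed-threshold idea is the same.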
Work with a leading SaaS product and services company, and learn about global processes and client requirements. Our client is a cloud integration and automation products provider. Their customised applications allow their clients' platforms to connect to any cloud, enhancing the user experience and allowing seamless integration of data. Their streamlined operations focus on strategic issues like secure infrastructure and an in-house UX that is 5 times faster and at a fraction of the cost. The founder is a Berkeley alumnus with a background in technology and business, and has packed in many years with IT and fintech companies. Based out of California with an office in Mumbai, the 4-year-old company is a niche player growing at a rate of 23% in an industry that is booming and blooming. Their products are used in industries such as Retail, E-commerce, Manufacturing, F&B, Pharma, and Education, among others.

As a Tech Lead, you will be responsible for writing unit and functional tests to ensure code coverage and enable automated testing of features.

What you will do:
• Designing and deploying databases.
• Ensuring the entire stack is designed and built for speed and scalability.
• Designing and constructing REST APIs.
• Mentoring other developers of the team through code and design reviews.

What you need to have:
• Strong proficiency in the primary stack (Golang, Node.js, Express, ES6, Docker, AWS, PHP, Laravel, Microservices, REST APIs).
• Strong proficiency in database tools (MongoDB, Mongoose, MySQL, Postgres, Eloquent, Sequelize, DynamoDB, Lucid Models, PDO, Redis, Memcached, GraphQL).
• Experience implementing testing platforms and unit tests.
• Proficiency with Git.
• Proficiency in tools (Ajax, Axios, TDD, OOP, MVC, jQuery, npm, Webpack, Guzzle, Git, HTML, CSS, Linux, Kubernetes, SVN, Blade, Ubuntu, PHPUnit, Jest, JIRA).
• Strong proficiency in AWS or similar environments (Microservices, Docker, AWS, Lambda, S3, SQS).
You will work on:
Your primary work involves developing and maintaining tools for build, release, deployment, monitoring, and operations, both on cloud and on-premises infrastructure. You are required to work closely with developers and cloud architects, and to own infrastructure automation, CI/CD processes, and support operations.

What you will do (Responsibilities):
• Day-to-day operational support of the CI/CD infrastructure relied upon by teams deploying software to the cloud or on-premise.
• Write code to automate the deployment of various services to private or public cloud/on-premise environments.
• Participate in cloud projects to implement new technology solutions and proofs of concept to improve our cloud technology offerings.
• Work with developers to deploy to private or public cloud/on-premise services, and debug and resolve issues.
• On-call responsibilities to respond to emergencies and scheduled maintenance.
• Contribute to and maintain documentation for systems, processes, procedures, and infrastructure configuration.

What you bring (Skills):
• Strong Linux system skills.
• Scripting in Bash and Python; basic file handling and networking.
• Comfortable with Git repositories, specifically on GitHub, GitLab, Bitbucket, and Gerrit.
• Comfortable interfacing with SQL and NoSQL databases such as MySQL, Postgres, MongoDB, Elasticsearch, and Redis.

Great if you know (Skills):
• Understanding of various build and CI/CD systems: Maven, Gradle, Jenkins, GitLab CI, Spinnaker, or cloud-based build systems.
• Exposure to deploying and automating on any public cloud: GCP, Azure, or AWS.
• Private cloud experience: VMware or OpenStack.
• Big DataOps experience: managing infrastructure and processes for Apache Airflow, Apache Beam, and Hadoop clusters.
• Containerized applications: Docker-based image builds and maintenance.
• Kubernetes applications: deploying and developing operators, Helm charts, and manifests, among other artifacts.
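The "scripting in Python, basic file handling" skill above is everyday work for this role. As a self-contained sketch (the key=value format and file names are assumptions for illustration, not a Cognologix convention), a small parser for an environment-style config file might look like this:

```python
# File-handling sketch: parse lines like 'PORT=8080' from a config file,
# skipping blanks and '#' comments. Format is an illustrative assumption.
import os
import tempfile

def load_config(path: str) -> dict:
    """Return a dict of key=value settings read from `path`."""
    config = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # ignore blanks and comments
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

# Demo: write a throwaway config file and read it back.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# service settings\nPORT=8080\nDB_HOST=localhost\n")
    path = fh.name

print(load_config(path))   # {'PORT': '8080', 'DB_HOST': 'localhost'}
os.unlink(path)
```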
Advantage Cognologix:
• Higher degree of autonomy, startup culture, and small teams.
• Opportunities to become an expert in emerging technologies.
• Remote working options for the right maturity level.
• Competitive salary and family benefits.
• Performance-based career advancement.

About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals. We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration, and delivery.
You will work on:
You will be working on some of our clients' massive-scale infrastructure and DevOps requirements: designing for microservices and large-scale data analytics. You will be working on enterprise-scale problems, but as part of an agile team that delivers like a startup. You will have the opportunity to be part of a team that is building and managing a large private cloud.

What you will do (Responsibilities):
• Work on cloud marketplace enablement for some of our clients' products.
• Write Kubernetes Operators to automate custom PaaS solutions.
• Participate in cloud projects to implement new technology solutions and proofs of concept to improve our cloud technology offerings.
• Work with developers to deploy to private or public cloud/on-premise services, and debug and resolve issues.
• On-call responsibilities to respond to emergencies and scheduled maintenance.
• Contribute to and maintain documentation for systems, processes, procedures, and infrastructure configuration.

What you bring (Skills):
• Experience administering and debugging Linux-based systems, with programming skills in scripting, Golang, and Python, among others.
• Expertise in Git repositories, specifically on GitHub, GitLab, Bitbucket, and Gerrit.
• Comfortable with DevOps for big data databases such as Teradata, Netezza, Hadoop-based ecosystems, BigQuery, and Redshift, among others.
• Comfortable interfacing with SQL and NoSQL databases such as MySQL, Postgres, MongoDB, Elasticsearch, and Redis.

Great if you know (Skills):
• Understanding of various build and CI/CD systems: Maven, Gradle, Jenkins, GitLab CI, Spinnaker, or cloud-based build systems.
• Exposure to deploying and automating on any public cloud: GCP, Azure, or AWS.
• Private cloud experience: VMware or OpenStack.
• Big DataOps experience: managing infrastructure and processes for Apache Airflow, Apache Beam, and Hadoop clusters.
• Containerized applications: Docker-based image builds and maintenance.
• Kubernetes applications: deploying and developing operators, Helm charts, and manifests, among other artifacts.
Position: Technology Lead - DevOps

Position Summary:
The Technology Lead provides technical leadership with in-depth DevOps experience and is responsible for enabling delivery of high-quality projects to Saviant clients through a highly effective DevOps process. This is a highly technical role, with a focus on analysing, designing, documenting, and implementing a complete DevOps process for enterprise applications using the most advanced technology stacks, methodologies, and best practices, within the agreed timelines. Individuals in this role need good technical and communication skills and should strive to be on the cutting edge, innovate, and explore in order to deliver quality solutions to Saviant clients.

Your Role & Responsibilities at Saviant:
• Design, analyze, document, and develop the technical architecture for on-premise as well as cloud-based DevOps solutions around customers' business problems.
• Lead the end-to-end process and setup of configuration management, CI, CD, and monitoring platforms.
• Conduct reviews of the design and implementation of DevOps processes, while establishing and maintaining best practices.
• Set up new processes to improve the quality of development, delivery, and deployment.
• Provide technical support and guidance to project team members.
• Keep learning technologies beyond your traditional area of expertise.
• Contribute to pre-sales, proposal creation, POCs, and technology incubation from a technical and architecture perspective.
• Participate in recruitment and people-development initiatives.

Job Requirements/Qualifications:
• Educational qualification: BE, BTech, MTech, or MCA from a reputed institute.
• 6 to 8 years of hands-on experience with the DevOps process using technologies such as .NET Core, Python, C#, MVC, ReactJS, Android, iOS, Linux, and Windows.
• Strong hands-on experience with the full DevOps life cycle: orchestration, configuration, security, CI/CD, release management, and environment management.
• Solid hands-on knowledge of DevOps technologies and tools such as Jenkins, Spinnaker, Azure DevOps, Chef, Puppet, JIRA, TFS, Git, SVN, and various scripting tools.
• Solid hands-on knowledge of containerization technologies and tools such as Docker, Kubernetes, and Cloud Foundry.
• In-depth understanding of various development and deployment architectures from a DevOps perspective.
• Expertise in grounds-up DevOps projects involving multiple agile teams spread across geographies.
• Experience with various Agile project management software, techniques, and tools.
• Strong analytical and problem-solving skills.
• Excellent written and oral communication skills.
• Enjoys working as part of agile software teams in a startup environment.

Who Should Apply?
• You have independently managed end-to-end DevOps projects over the last 2 years, including understanding requirements, designing solutions, implementing them, and setting up best practices across different business domains.
• You are well versed in Agile development methodologies and have successfully implemented them across at least 2-3 projects.
• You have led a development team of 5 to 8 developers with technology responsibility.
• You have served as the single point of contact for managing technical escalations and decisions.
What we need
We are looking for a strong DevOps candidate with good backend experience to join our team. We create new technology every day: new ways to do things, new ways to connect, collect, and present data. This person will be responsible for making sure that, through technology, we deliver with high availability and scalability. You are expected to have at least 2 years of professional experience in the DevOps field.

Responsibilities
• Contribute to the design and architecture of the product.
• Own the product infrastructure over public and private clouds.
• Work closely with the team and set up the necessary automation processes to help the team deliver multiple production releases per day.
• Own the release processes and make sure they are as smooth as possible.
• Make sure that the whole infrastructure is automated and coded, so that it can be re-created in as little time as possible.
• Maintain production environment stability, high availability, and scalability.
• Develop backend APIs as per the product's requirements.

Skills Desired
• Excellent command of DevOps, AWS/Azure cloud, CI/CD pipelines, Docker and Kubernetes, Jenkins, Git, and GitHub.
• Experience in network design, infrastructure design, and automation.
• Experience designing secure HA, DR, backup, and retention architectures using the AWS/Azure cloud platform.
• Proficient with the different Docker components (Docker Engine, image management, Compose, and Docker Registry), and has worked with Kubernetes (cloud and on-premises).
• Expertise in Linux, Python, Perl, and shell scripting.
• Administration experience on various Linux flavors, including CentOS, RHEL, and Debian.
• Experience in environment provisioning, administration, and monitoring on the AWS/Azure cloud.
• Expertise in branching, tagging, and maintaining versions across environments using SCM tools like GitHub.
• Experience in log management, analysis, and alerting approaches.
• Ability to work within a team and independently.
• Strong analytical and creative problem-solving skills.
• Experience in Node.js, the .NET Framework, and .NET Core for backend development.

Additional Skills Desired
• Experience in the e-commerce, travel, or manufacturing domain is an advantage.
• Prior experience working with a globally distributed team is a plus.
• Comfortable working in an Agile environment.
• B.E./B.Tech in Computer Science or Industrial/Electronic Engineering.
• Experienced working on Windows and Linux and/or Unix.
• Customer-facing experience, with customer experience in mind.
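The "log management, analysis, and alerting" skill above often starts as simple log scanning before graduating to an ELK or CloudWatch stack. A self-contained sketch of the idea (the log line format and threshold are assumptions for illustration): count errors per service and report which services breach an alert threshold.

```python
# Log-analysis sketch: count ERROR lines per service and flag services whose
# error count exceeds a threshold. The line format is an assumed example.
import re

LOG_LINE = re.compile(r"^\S+ (?P<level>[A-Z]+) (?P<service>[\w-]+): ")

def services_to_alert(lines, max_errors: int = 2):
    """Return a sorted list of services with more than `max_errors` ERROR lines."""
    errors = {}
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            svc = m.group("service")
            errors[svc] = errors.get(svc, 0) + 1
    return sorted(s for s, n in errors.items() if n > max_errors)

logs = [
    "2024-01-01T00:00:01 ERROR payments: timeout",
    "2024-01-01T00:00:02 INFO web: ok",
    "2024-01-01T00:00:03 ERROR payments: timeout",
    "2024-01-01T00:00:04 ERROR payments: timeout",
    "2024-01-01T00:00:05 ERROR web: 500",
]
print(services_to_alert(logs))   # ['payments']
```

The same threshold-per-key shape underlies most alerting rules, whatever tool eventually evaluates them.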
Our current operations research product is deployed at some of the largest organizations in the world. This role will be responsible for re-architecting the existing solution and adding components of machine learning and intelligence to its logic. We are looking for a passionate individual who loves technology and is willing to design and create a flexible long-lasting product architecture.
What are we looking for?
You are an engineer with an eye for constant improvement. You not only look at improving the code, but also the tooling, the commands you use, the user-facing documentation, and everything else that makes great and beautiful products possible. You can talk fluently to computers: it does not matter whether it is Python, Go, Java, NodeJS, or any other widely used programming language; as long as you know one and know it well, you fit right in. You believe languages are just tools to solve problems. You really like to solve problems, complex engineering problems! You have some exposure to and understanding of systems. Out of curiosity, you have tried to understand routing and load balancing in a web server, or how the Linux filesystem was built and why it is structured the way it is. You may not have worked with these extensively, but you have definitely dabbled with them, and you can reason about how systems interact with each other. You can express your ideas and opinions, and you aren't shy to say so if you don't know something. We are not hiring Wikipedia, after all, are we?

What you will be learning and doing?
You will be part of a team building a product to solve the next generation of problems in programmable infrastructure. It is you who will start by defining a feature and how it will make the life of the end user better, and then make it into a reality. You will most likely be programming in Go or Python (not to worry if you have not used them before; some of our best engineers started fresh on these languages after they joined us). Most likely, some part of your work will be open source and worthy of talking about and presenting at conferences and meetups. You will be working with cloud native technologies such as Docker, Kubernetes, Prometheus, service meshes, and distributed tracing in some shape or form.
You will also be working with one or more public cloud platforms from AWS, Azure, and Google Cloud Platform (again, you may not know some or any of these technologies, and that is not a deal breaker). Your workflow will be driven by tools such as GitHub, Slack, and a lot of asynchronous communication with distributed teams. GitHub issues will be your new, reincarnated Jira ;)

We think InfraCloud is a rocketship you should join! InfraCloud has been working in cloud native technologies with early innovators since before Kubernetes was 1.0, when it seemed like Mesos would become the standard! Our focus and history in the area of programmable infrastructure, coupled with our work with some innovative product companies, give us some solid engineering challenges to work on. From one of our hackathons was born the BotKube project (https://github.com/infracloudio/botkube), which has been developed by our engineers and community over the last 1.5 years. When we started developing BotKube's Microsoft Teams integration, another project was born: a Go SDK for Teams (https://github.com/infracloudio/msbotbuilder-go). We are also the second-largest company contributor to Fission, a serverless framework for Kubernetes (http://github.com/fission/fission). Another time, an engineer working with a telecom company added support for 128-bit tracing IDs in the Jaeger client. These are just some examples, and there are many more; do make a point to ask the engineers you talk to about the other open source work we do. Our engineers are co-organizers of Kubernetes Pune, Docker Pune, and PythonPune, and can be found frequently speaking at local meetups and conferences.
What are we looking for?
You are an engineer with an eye for constant improvement. You not only look at improving the code, but also the tooling, the commands you use, the user-facing documentation, and everything else that makes great and beautiful products possible. You don't throw around words such as "high availability" or "resilient systems" without understanding at least their basics, because you know that words are easy, but there is a fair amount of work involved in building such a system in practice. You love coaching people: about 12-factor apps, or the latest tool that reduced the time you spend on a task by X times, and so on. You lead by example when it comes to technical work and community. You will be a hands-on contributor, but you also love to grow the people in your team by guiding and mentoring them. You can talk fluently to computers: it does not matter whether it is Python, Go, Java, NodeJS, or any other widely used programming language; as long as you know one and know it well, you fit right in. You believe languages are just tools to solve problems. You have some exposure to and understanding of systems: you have worked on some systems and are curious about the ones you have not worked on. You can express your ideas and opinions, and you aren't shy to say so if you don't know something. We are not hiring Wikipedia, after all, are we?

What you will be learning and doing?
You will be working on cloud native technologies such as Kubernetes, Prometheus, and service meshes like Linkerd and Istio. You will most likely be programming in Go or Python (not to worry if you have not used them before; some of our best engineers started fresh on these languages after they joined us). You understand the needs of customers and can translate them into solutions that work and scale using open source cloud native technologies. You will manage a team of technical engineers, but you also love to get your hands dirty with a new tool or a new framework from time to time. You build your own perspective and viewpoint on things, because you don't believe ivory-tower architects are effective. You will potentially contribute to open source projects as part of your work, and you will apply those technologies in the context of customer problems.
Software Engineers in Test/Test Engineers work within agile software development teams to ensure software is designed and implemented for testability. They write automated unit, component, and system tests to ensure code quality and detect regressions early in the development cycle. They are responsible for accurate test execution documentation. They also maintain the integration and test frameworks used by multiple development teams. They must be able to support a fast-paced agile software release process for one of Convergent's cybersecurity customers.

Job Duties
• Develop software to perform unit, component, and system testing.
• Develop and maintain test execution and tracking software.
• Contribute to architecture designs, providing feedback on testability.
• Decompose user stories into tasks and estimate story points for user stories and tasks.
• Perform manual tests as required.

Experience and Skills:
Required Skills
o Experience in test design and implementation
o Experience in release management
o Experience with Python
o Experience with Docker
o Experience with Linux systems (CentOS and RHEL preferred)
o Experience with automation tools and frameworks (unittest, Jenkins, Selenium)
o Experience with Linux Bash scripting and administration
o Works and communicates well in a distributed/remote team environment
Desired Skills
o Experience with Git and Ansible
o Experience with the Red Hat Package Manager (RPM) and yum repositories
o Experience testing applications and deploying them to cloud microservice architectures (AWS, OpenShift, Azure, etc.)
o Experience with Django-based full-stack web development
o Experience testing web application frontends (React preferred)
o Familiarity with KVM/ESX virtual environments
o BS in Computer Science, Engineering, or a related field (3 years of work experience in place of a degree)

If you are interested in the above position, please send your updated CV along with the following details:
• Current CTC:
• Expected CTC:
• Notice period:
• Can relocate to Baner, Pune (Y/N):
• Experience with automation tools and frameworks (unittest, Jenkins, Selenium), in years:
• Experience with Python scripting, in years:
• Experience with Linux systems (CentOS and RHEL preferred), in years:
• Experience with Docker, in years:
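The posting above asks specifically for experience with Python's `unittest` framework. As a minimal, self-contained example of the shape such a test suite takes (the function under test is a stand-in for illustration, not Convergent's code):

```python
# Minimal unittest example: a TestCase covering a small stand-in function.
import unittest

def normalize_hostname(name: str) -> str:
    """Lower-case a hostname and strip whitespace and a trailing dot (stand-in unit under test)."""
    return name.strip().lower().rstrip(".")

class TestNormalizeHostname(unittest.TestCase):
    def test_lowercases(self):
        self.assertEqual(normalize_hostname("Node-01.EXAMPLE.COM"),
                         "node-01.example.com")

    def test_strips_trailing_dot(self):
        self.assertEqual(normalize_hostname("db.example.com."),
                         "db.example.com")

# Run the suite programmatically (equivalent to `python -m unittest <file>`).
runner = unittest.TextTestRunner(verbosity=2)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeHostname))
```

In a CI setup, Jenkins would typically invoke the suite via `python -m unittest discover` and fail the build on any non-passing test.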
What we look for?
You have worked with programmable infrastructure in some way: building a CI/CD pipeline, provisioning infrastructure with programs, or provisioning monitoring and logging infrastructure for large sets of machines. You love automating things, sometimes even what seems un-automatable; for example, one of our engineers used Ansible to set up his Ubuntu machine and runs a playbook every time he has to install something :) You don't throw around words such as "high availability" or "resilient systems" without understanding them at least at a basic level, because you know that words are easy, but there is a fair amount of work involved in building such a system in practice. You love coaching people: about 12-factor apps, or the latest tool that reduced the time you spend on a task by X times, and so on. You know that DevOps is meant to enable developers to do things better and faster! You understand the areas you have worked on very well, but you are curious about the many systems you may not have worked on, and you want to fiddle with them. You know that understanding applications and the runtime technologies gives you a better perspective; you have never looked at them as two different things.

What you will learn and do?
You will work with customers who are transforming their applications and adopting cloud native technologies. The technologies used will be Kubernetes, Prometheus, service meshes, distributed tracing, and public cloud technologies. The problems and solutions in this space are continuously evolving, but fundamentally you will solve problems with the simplest, most scalable automation. You will build open source tools for problems that you think are common across customers and the industry; no one ever benefited from re-inventing the wheel, did they? You will hack around open source projects, understand their capabilities and limitations, and apply the right tool to the right job.
You will educate customers, from their operations engineers to their developers, on scalable ways to build and operate applications on modern cloud native infrastructure.
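The 12-factor apps mentioned above as a coaching topic have a very concrete core: factor III says configuration lives in the environment, not in code. A minimal sketch of that idea (the variable names are illustrative assumptions):

```python
# 12-factor config sketch: read typed settings from environment variables,
# with sensible defaults. Variable names here are illustrative.
import os

def env_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

os.environ["APP_PORT"] = "9090"      # would normally be set by the platform
print(env_int("APP_PORT", 8080))     # 9090
print(env_int("APP_WORKERS", 4))     # 4 (unset, so the default is used)
```

The payoff is that the same build artifact runs unchanged in dev, staging, and production; only the environment differs.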
1. Should have been working for at least 3 years as a DevOps/Cloud Engineer in an AWS cloud environment.
2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands them very clearly.
3. Deep understanding of microservice design, and aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper).
4. Hands-on experience working with containers and their orchestration using Kubernetes.
5. Hands-on experience with the Linux and Windows operating systems.
6. Has worked with NoSQL databases such as Cassandra, Aerospike, MongoDB, or Couchbase, and with central logging, monitoring, and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Good knowledge of network security, security architecture, and secure SDLC practices.
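The centralized caching mentioned in point 3 is usually applied via the cache-aside pattern: check the cache, fall back to the source of truth on a miss, then populate the cache for later readers. A self-contained sketch of the pattern, with a plain dict standing in for a Redis client (so the example stays runnable without a Redis server):

```python
# Cache-aside sketch. A dict stands in for Redis; `calls["db"]` counts how
# often the (pretend) database is actually hit.
calls = {"db": 0}
cache = {}                       # stand-in for a Redis connection

def fetch_user(user_id: int) -> str:
    key = f"user:{user_id}"
    if key in cache:             # cache hit: skip the database entirely
        return cache[key]
    calls["db"] += 1             # cache miss: go to the source of truth
    value = f"user-record-{user_id}"   # pretend database lookup
    cache[key] = value           # populate the cache for subsequent readers
    return value

fetch_user(42)
fetch_user(42)
print(calls["db"])               # 1 (the second call was served from cache)
```

With a real Redis client, the dict lookups become GET/SET calls and a TTL is typically set on each key so stale entries expire.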
Job Description:
We are looking for a DevOps Architect who will be part of a diverse software development team. The team will be responsible for building and maintaining multiple environments. He/she will bring cloud management and sysadmin skills for application deployment and management, while assisting in the automation of the end-to-end deployment process. He/she should be passionate about trying out newer technologies and software methodologies, while also having broad experience with build and deployment software and a level-headed approach to problem-solving.

Responsibilities:
• Execution of the automation architecture, which involves service automation and automation of application builds, monitoring services, and databases.
• Supporting various metrics and reporting requirements.
• Implementing security features across the automation pipeline, including SSL certificates.
• Maintaining access controls across all environments, along with auditing any access breaches.
• Providing day-to-day work allocation and strong leadership and team management.
• Build and maintain infrastructure that utilizes public as well as on-prem clouds.
• Collaborate with customers on the design of end-to-end solutions.
• Lead the deployment of projects and act as a point of escalation to help resolve any issues.
• Act as a technical liaison between customers, engineers, and support.
• Coach and mentor team members, and guide them through technical challenges.
• Maintain technical skills and knowledge, keeping up to date with market trends and competitive insights.

Requirements:
• Good knowledge of AWS/Azure/Google Cloud IaaS (AMIs, pricing models, VPCs, subnets, etc.) and AWS/Azure/Google Cloud security best practices.
• Strong working knowledge of infrastructure automation tools such as Ansible, Terraform, and Chef.
• Ability to manage the CI/CD pipeline and help with release automation and deployment.
• Experience with Docker and Kubernetes is mandatory; Docker Swarm/container clustering will be a plus.
• Write software and scripts to automate tasks and gather metrics.
• Solid understanding of a Linux distribution, including advanced troubleshooting skills.
• Knowledge of tools like Jenkins, Travis CI, Bamboo, or GitLab CI is required.
• Knowledge of SCM tools like Git, GitHub, Bitbucket, or GitLab is required.
• In-depth knowledge of databases such as MySQL, MongoDB, Elasticsearch, etc.
• Experience using monitoring tools like Prometheus, Grafana, CloudWatch, etc.
• Knowledge of writing scripts for automation using Shell, Python, or Perl.
• Good understanding of RESTful web services.

Good to have:
• Passion for writing great, simple, clean, and efficient code.
• A fast learner with excellent problem-solving capabilities.
• Excellent written and verbal communication skills.
• Experience working with large-scale distributed systems is a plus.
• Able to independently design and build components for the automation platform.
• Able to assist in the maintenance of the tools and in troubleshooting issues.

Why should you join Opcito?
We are a dynamic start-up that believes in designing transformational solutions for our customers, with the ability to unify quality, reliability, and cost-effectiveness at any scale. Our core work culture focuses on adding material value to client products by leveraging best practices in DevOps, like continuous integration, continuous delivery, and automation, coupled with disruptive technologies like cloud, containers, serverless computing, and microservice-based architectures.
Here are some of the perks of working with Opcito:
• Outstanding career development and learning opportunities.
• Competitive compensation depending on experience and skill.
• Friendly team and enjoyable working environment.
• Flexible working schedule.
• Corporate and social events.