
About the Role:
We are looking for a skilled AWS DevOps Engineer to join our Cloud Operations team in Bangalore. This hybrid role is ideal for someone with hands-on experience in AWS and a strong background in application migration from on-premises to cloud environments. You'll play a key role in driving cloud adoption, optimizing infrastructure, and ensuring seamless cloud operations.
Key Responsibilities:
- Manage and maintain AWS cloud infrastructure and services.
- Lead and support application migration projects from on-prem to cloud.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools.
- Monitor cloud environments and optimize cost, performance, and reliability.
- Collaborate with development, operations, and security teams to implement DevOps best practices.
- Troubleshoot and resolve infrastructure and deployment issues.
Required Skills:
- 3–5 years of experience in an AWS cloud environment.
- Proven experience with on-premises to cloud application migration.
- Strong understanding of AWS core services (EC2, VPC, S3, IAM, RDS, etc.).
- Solid scripting skills (Python, Bash, or similar).
Good to Have:
- Experience with Terraform for Infrastructure as Code.
- Familiarity with Kubernetes for container orchestration.
- Exposure to CI/CD tools like Jenkins, GitLab, or AWS CodePipeline.
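The scripting skills called for above typically show up in small automation helpers used by migration and provisioning scripts. As an illustrative sketch (the function name and parameters are hypothetical, not from the posting), a retry wrapper with exponential backoff is a common pattern when calling cloud APIs that can fail transiently:

```python
import time

def retry(operation, attempts=3, base_delay=1.0):
    """Call `operation` with exponential backoff between failures;
    re-raise the exception if the final attempt also fails."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

A migration script would wrap each flaky API call, e.g. `retry(lambda: client.describe_stacks())`, instead of hand-rolling try/except at every call site.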

Job Summary
We are looking for a skilled AWS DevOps Engineer with 5+ years of experience to design, implement, and manage scalable, secure, and highly available cloud infrastructure. The ideal candidate should have strong expertise in CI/CD, automation, containerization, and cloud-native deployments on Amazon Web Services.
Key Responsibilities
- Design, build, and maintain scalable infrastructure on AWS
- Implement and manage CI/CD pipelines for automated build, test, and deployment
- Automate infrastructure using Infrastructure as Code (IaC) tools
- Monitor system performance, availability, and security
- Manage containerized applications and orchestration platforms
- Troubleshoot production issues and ensure high availability
- Collaborate with development teams for DevOps best practices
- Implement logging, monitoring, and alerting systems
Required Skills
- Strong hands-on experience with AWS services (EC2, S3, RDS, VPC, IAM, Lambda, CloudWatch)
- Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI/CD
- Infrastructure as Code using Terraform / CloudFormation
- Containerization using Docker
- Container orchestration using Kubernetes / EKS
- Scripting knowledge in Python / Bash / Shell
- Experience with monitoring tools (CloudWatch, Prometheus, Grafana)
- Strong understanding of Linux systems and networking
- Experience with Git and version control
Good to Have
- Experience with configuration management tools (Ansible, Chef, Puppet)
- Knowledge of microservices architecture
- Experience with security best practices and DevSecOps
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience working in Agile/Scrum teams
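The logging, monitoring, and alerting responsibility above usually involves suppressing one-off metric spikes so alerts only fire on sustained breaches. A minimal sketch of that idea (names and thresholds are hypothetical; real systems would use CloudWatch alarm evaluation periods or Prometheus `for` clauses):

```python
def should_alert(samples, threshold, min_breaches=3):
    """Return True only when `threshold` is exceeded in `min_breaches`
    consecutive samples, so a single spike does not page anyone."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= min_breaches:
            return True
    return False
```

This mirrors how alarm "evaluation periods" work in managed monitoring tools: the condition must hold for N consecutive datapoints before transitioning to an alarm state.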
- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, CI/CD pipelines, etc., in cloud infrastructure.
- Responsible for building and managing the DevOps agile toolchain.
- Responsible for working as an integrator between developer teams and various cloud infrastructures.
- Responsibilities include helping the development team with best practices, provisioning, monitoring, troubleshooting, optimizing and tuning, and automating and improving deployment and release processes.
- Responsible for maintaining application security, with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerization of deployment units, and strategizing this in coordination with the developer team.
- Setting up tools and required infrastructure. Defining and setting development, test, release, update, and support processes for DevOps operation
- Responsible for documentation of the process.
- Responsible for leading projects with end-to-end execution.
Qualification: Bachelor of Engineering / MCA, preferably with AWS Cloud certification.
Ideal Candidate:
- has 2-4 years of DevOps experience with AWS certification.
- is under 30 years of age, self-motivated, and enthusiastic.
- is interested in building a sustainable DevOps platform with maximum automation.
- is interested in learning and being challenged on a day-to-day basis.
- can take ownership of tasks and is willing to take the necessary action to get them done.
- can solve complex problems.
- is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
DevOps Lead Engineer
We are seeking a skilled DevOps Lead Engineer with 8 to 10 years of experience who handles the entire DevOps lifecycle and is accountable for the implementation of the process. A DevOps Lead Engineer is responsible for automating all the manual tasks for developing and deploying code and data, implementing continuous integration and continuous deployment frameworks. They are also responsible for maintaining high availability of production and non-production environments.
Essential Requirements (must have):
• Bachelor's degree, preferably in Engineering.
• Solid 5+ years of experience with AWS, DevOps, and related technologies.
Skills Required:
Cloud Performance Engineering
• Performance scaling in a Micro-Services environment
• Horizontal scaling architecture
• Containerization (such as Docker) & Deployment
• Container Orchestration (such as Kubernetes) & Scaling
DevOps Automation
• End-to-end release automation.
• Solid experience in DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CloudFormation, etc.
• Solid experience in infra automation (Infrastructure as Code), deployment, and implementation.
• Candidates must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
• Strong scripting knowledge
• Strong analytical and problem-solving skills.
• Cloud and On-prem deployments
Infrastructure Design & Provisioning
• Infra provisioning.
• Infrastructure Sizing
• Infra Cost Optimization
• Infra security
• Infra monitoring & site reliability.
Job Responsibilities:
• Responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment and provide a stable environment for quality delivery.
• Accountable for designing, building, configuring, and optimizing automation systems that help execute business web and data infrastructure platforms.
• Involved in creating technology infrastructure and automation tools, and maintaining configuration management.
• Oversees and leads the activities of the DevOps team, including conducting training sessions for juniors, mentoring, and career support. Also answerable for the architecture and technical leadership of the complete DevOps infrastructure.
LogiNext is looking for a technically savvy and passionate Principal DevOps Engineer or Senior Database Administrator to cater to the development and operations efforts in product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations and automate daily tasks
- Ensure high availability and auto-failover with minimal or no manual intervention
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 8 to 10 years of experience in designing and maintaining high-volume, scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix administration and Python/Shell scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in query analysis, performance tuning, and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication, judgment, and decision-making skills
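The backup-policy responsibility above hinges on RPO: the maximum tolerable gap between recovery points. A small illustrative check (function and argument names are hypothetical) that a backup-verification script might run against recorded backup timestamps:

```python
from datetime import datetime, timedelta

def rpo_violations(backup_times, rpo=timedelta(hours=1)):
    """Return the gaps between consecutive backups that exceed the
    RPO target; an empty list means the policy was honoured."""
    ordered = sorted(backup_times)
    gaps = (later - earlier for earlier, later in zip(ordered, ordered[1:]))
    return [gap for gap in gaps if gap > rpo]
```

Wiring this into monitoring (alert when the list is non-empty) turns the RPO from a document statement into an enforced property.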
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture in their organization by focusing on a long-term DevOps roadmap. We focus on identifying technical and cultural issues in the journey of successfully implementing DevOps practices in the organization and work with the respective teams to fix issues and increase overall productivity. We also run training sessions for developers to make them realize the importance of DevOps. We provide these services - DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Review, MLOps, Governance, Risk & Compliance. We do assessments of technology architecture, security, governance, compliance, and DevOps maturity model for any technology company and help them optimize their cloud cost, streamline their technology architecture, and set up processes to improve the availability and reliability of their website and applications. We set up tools for monitoring, logging, and observability. We focus on bringing the DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Our Mission
Our mission is to help customers achieve their business objectives by providing innovative, best-in-class consulting, IT solutions and services and to make it a joy for all stakeholders to work with us. We function as a full stakeholder in business, offering a consulting-led approach with an integrated portfolio of technology-led solutions that encompass the entire Enterprise value chain.
Our Customer-centric Engagement Model defines how we engage with you, offering specialized services and solutions that meet the distinct needs of your business.
Our Culture
Culture forms the core of our foundation, and our efforts toward creating an engaging workplace define Infra360 Solutions Pvt Ltd.
Our Tech-Stack:
- Azure DevOps, Azure Kubernetes Service, Docker, Active Directory (Microsoft Entra)
- Azure IAM and managed identity, Virtual Network, VM Scale Set, App Service, Cosmos DB
- Azure MySQL, scripting (PowerShell, Python, Bash)
- Azure Security, security documentation, security compliance
- AKS, Blob Storage, Azure functions, Virtual Machines, Azure SQL
- AWS - IAM, EC2, EKS, Lambda, ECS, Route53, Cloud formation, Cloud front, S3
- GCP - GKE, Compute Engine, App Engine, SCC
- Kubernetes, Linux, Docker & Microservices Architecture
- Terraform & Terragrunt
- Jenkins & Argo CD
- Ansible, Vault, Vagrant, SaltStack
- CloudFront, Apache, Nginx, Varnish, Akamai
- MySQL, Aurora, Postgres, AWS Redshift, MongoDB
- Elasticsearch, Redis, Aerospike, Memcached, Solr
- ELK, Fluentd, Elastic APM & Prometheus/Grafana stack
- Java (Spring/Hibernate/JPA/REST), Nodejs, Ruby, Rails, Erlang, Python
What does this role hold for you?
- Infrastructure as Code (IaC)
- CI/CD and configuration management
- Managing Azure Active Directory (Entra)
- Keeping the cost of the infrastructure to the minimum
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning of different environments infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Setting up the right set of security measures
Requirements
Apply if you have…
- A graduation/post-graduation degree in Computer Science or related fields
- 2-4 years of strong DevOps experience in Azure with Linux environments
- Strong interest in working in our tech stack
- Excellent communication skills
- Ability to work with minimal supervision; a self-starter
- Hands-on experience with at least one scripting language - Bash, Python, Go, etc.
- Experience with version control systems like Git
- Understanding of Azure cloud computing services and cloud computing delivery models (IaaS, PaaS, and SaaS)
- Strong scripting or programming skills for automating tasks (PowerShell/Bash)
- Knowledge and experience with CI/CD tools: Azure DevOps, Jenkins, GitLab, etc.
- Knowledge and experience with at least one IaC tool (ARM Templates / Terraform)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues in different layers of the architecture in a production environment
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, e.g., Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM
- Experience with logging tools like ELK/Loki
- Experience using Microsoft Azure cloud services
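Capacity planning, one of the responsibilities listed for this role, often starts with a simple linear projection of resource growth. An illustrative sketch (function and parameter names are hypothetical) of the kind of headroom check an automation script might run against disk or database usage:

```python
def days_until_full(capacity_gb, used_gb, daily_growth_gb):
    """Project how many days of headroom remain, assuming usage
    grows linearly at `daily_growth_gb` per day."""
    if daily_growth_gb <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    return (capacity_gb - used_gb) / daily_growth_gb
```

Real capacity planning layers seasonality and percentile-based forecasts on top of this, but a linear projection like the above is the usual first alerting signal.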
If you are passionate about infrastructure, and cloud technologies, and want to contribute to innovative projects, we encourage you to apply. Infra360 offers a dynamic work environment and opportunities for professional growth.
Interview Process
Application Screening => Test/Assessment => 2 Rounds of Tech Interview => CEO Round => Final Discussion

Strong DevOps experience of at least 4 years
Strong experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired
Experience with Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to continuous deployment tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems
Ability to scope and estimate
Strong verbal and communication skills
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain as a Service (BaaS) platforms like Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
Capable of provisioning and maintaining local enterprise blockchain platforms for Development and QA (Hyperledger Fabric/BaaS/Corda/ETH).
As a MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
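The automated-testing expectation above (pytest) in practice means plain functions with bare asserts that the test runner discovers by name. A minimal illustrative example (the `normalize` utility is hypothetical, a stand-in for any small function an ML pipeline might test):

```python
def normalize(values):
    """Min-max scale a non-empty list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant input: avoid divide-by-zero
    return [(v - lo) / (hi - lo) for v in values]

def test_normalize():
    # pytest collects any function named test_*; failures show the
    # asserted values via assertion introspection
    assert normalize([2, 4, 6]) == [0.0, 0.5, 1.0]
    assert normalize([3, 3]) == [0.0, 0.0]
```

Running `pytest` in the repository root would pick up this test with no registration or boilerplate.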

Platform Services Engineer
DevSecOps Engineer
- Strong Systems Experience- Linux, networking, cloud, APIs
- Scripting language Programming - Shell, Python
- Strong Debugging Capability
- AWS Platform - IAM, Network, EC2, Lambda, S3, CloudWatch
- Knowledge on Terraform, Packer, Ansible, Jenkins
- Observability - Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- DevSecOps CI/CD - Jenkins
- Microservices
- Security & Access Management
- Container Orchestration a plus - Kubernetes, Docker etc.
- Big Data platforms knowledge (EMR, Databricks, Cloudera) a plus
Location: Amravati, Maharashtra 444605, INDIA
We are looking for a Kubernetes Cloud Engineer with experience in deployment and administration of Hyperledger Fabric blockchain applications. While you will work on Truscholar blockchain based platform (both Hyperledger Fabric and INDY versions), if you combine rich Kubernetes experience with strong DevOps skills, we will still be keen on talking to you.
Responsibilities
● Deploy Hyperledger Fabric (BEVEL SETUP) applications on Kubernetes
● Monitoring Kubernetes system
● Implement and improve monitoring and alerting
● Build and maintain highly available blockchain systems on Kubernetes
● Implement an auto-scaling system for our Kubernetes nodes
● Detailed design & development of the SSI & ZKP solution
● Act as a liaison between the Infra, Security, Business & QA teams for end-to-end integration and DevOps pipeline adoption
Technical Skills
● Experience with AWS EKS Kubernetes Service, Container Instances, Container Registry and microservices (or similar experience on AZURE)
● Hands on with automation tools like Terraform, Ansible
● Ability to deploy Hyperledger Fabric in Kubernetes environment is highly desirable
● Hyperledger Fabric/INDY (or other blockchain) development, architecture, integration, application experience
● Distributed consensus systems such as Raft
● Continuous Integration and automation skills, including GitLab CI/CD
● Microservices architectures, cloud-native architectures, event-driven architectures, APIs, Domain-Driven Design
● Being a Certified Hyperledger Fabric Administrator would be an added advantage
Skills Set
● Understanding of Blockchain Networks
● Docker Products
● Amazon Web Services (AWS)
● Go (Programming Language)
● Hyperledger Fabric/INDY
● Gitlab
● Kubernetes
● Smart Contracts
Who We are:
Truscholar is a state-of-the-art Digital Credential Issuance and Verification Platform running on blockchain infrastructure as an instance of the Hyperledger Indy framework. Our solution helps universities, institutes, EdTech and e-learning platforms, professional training academies, corporate employee training and certification programs, and event management organisations running exhibitions, trade fairs, sporting events, seminars and webinars to issue credentials to their learners, employees or participants. The digital certificates, badges, or transcripts generated are immutable, shareable and verifiable, thereby building an individual's Knowledge Passport. Our platform has been architected to function as a single Self-Sovereign Identity Wallet for the next decade, keeping personal data privacy guidelines in mind.
Why Now?
The Startup venture, which was conceived as an idea while two founders were pursuing a Blockchain Technology Management Course, has received tremendous applause and appreciation from mentors and investors, and has been able to roll out the product within a year and comfortably complete the product market fit stage. Truscholar has entered a growth stage, and is searching for young, creative, and bright individuals to join the team and make Truscholar a preferred global product within the next 36 months.
Our Work Culture:
With our innovation, open communication, agile thought process, and will to achieve, we are a very passionate group of individuals driving the company's growth. As a result of their commitment to the company's development narrative, we believe in offering a work environment with clear metrics to support workers' individual progress and networking within the fraternity.
Our Vision:
To become the “Intel Inside” of the education world by powering all academic credentials across the globe and assisting students in charting their digital academic passports.
Location Advantage: Amravati, Maharashtra, INDIA
Amid businesses in India realising the advantages of the work-from-home (WFH) concept in the backdrop of the Coronavirus pandemic, there has been a major shift of the workforce towards tier-2 cities.
Amravati, also called Ambanagri, is a city of immense cultural and religious importance and a beautiful tier-2 city of Maharashtra. It is also called the cultural capital of the Vidarbha region. The cost of living is lower, the work-life balance is better, the air is more breathable, there are fewer traffic bottlenecks, and housing remains affordable compared to the congested and eccentric metro cities of India. We firmly believe that tier-2 cities are the future talent hubs and job-creation centres. Our conviction has been borne out by the fact that tier-2 cities have made great strides in salary levels due to heavy investment in building excellent physical and social infrastructure.
Senior Devops Engineer
Who are we?
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced Cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to cloud.
What do we believe?
- Best practices are overrated
- Implementing best practices can only make one ‘average’.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How do we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
- Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion for sales: When was the last time you went at a remote gas station while on vacation, and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’
Introduction
When was the last time you thought about rebuilding your smart phone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.
We are quite keen to meet you if:
- You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
- You like experimenting, taking risks and thinking big.
3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build and recruit & energize.
- This is NOT a position for someone who likes to be told what needs to be done.
3 things this position IS about:
- Attention to detail matters.
- Roles, titles, ego does not matter; getting things done matters; getting things done quicker & better matters the most.
- Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?
Roles and Responsibilities
This is an entrepreneurial Cloud/DevOps Lead position that evolves into the Director - Cloud Engineering role. This position requires fanatic iterative improvement ability - architect a solution, code, research, understand customer needs, research more, rebuild and re-architect; you get the drift. We are seeking hard-core-geeks-turned-successful-techies who are interested in seeing their work used by millions of users the world over.
Responsibilities:
- Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML technologies
- Design, deploy and maintain Cloud infrastructure for Clients – Domestic & International
- Develop tools and automation to make platform operations more efficient, reliable and reproducible
- Create Container Orchestration (Kubernetes, Docker), strive for full automated solutions, ensure the up-time and security of all cloud platform systems and infrastructure
- Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
- Providing business, application, and technology consulting in feasibility discussions with technology team members, customers and business partners
- Take initiatives to lead, drive and solve during challenging scenarios
Requirements:
- 3+ years of experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems, RHEL/CentOS preferred
- Specialize in one or two cloud deployment platforms: AWS, GCP, Azure
- Hands on experience with AWS services (EC2, VPC, RDS, DynamoDB, Lambda)
- Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Knowledge on Configuration Management tools such as Ansible, Terraform, Puppet, Chef
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
- Deep experience in customer facing roles with a proven track record of effective verbal and written communications
- Dependable and good team player
- Desire to learn and work with new technologies
Key Success Factors
- Are you
- Likely to forget to eat, drink or pee when you are coding?
- Willing to learn, re-learn, research, break, fix, build, re-build and deliver awesome code to solve real business/consumer needs?
- An open source enthusiast?
- Absolutely technology agnostic and believe that business processes define and dictate which technology to use?
- Ability to think on your feet, and follow-up with multiple stakeholders to get things done
- Excellent interpersonal communication skills
- Superior project management and organizational skills
- Logical thought process; ability to grasp customer requirements rapidly and translate the same into technical as well as layperson terms
- Ability to anticipate potential problems, determine and implement solutions
- Energetic, disciplined, with a results-oriented approach
- Strong ethics and transparency in dealings with clients, vendors, colleagues and partners
- Attitude of ‘give me 5 sharp freshers and 6 months and I will rebuild the way people communicate over the internet.’
- You are customer-centric, and feel strongly about building scalable, secure, quality software. You thrive and succeed in delivering high quality technology products in a growth environment where priorities shift fast.
