
- Preferred: development experience with Kafka or other big data technologies, including an understanding of core Kafka components (ZooKeeper, brokers) and optimization of Kafka client applications (producers and consumers); a minimal client sketch appears after the Soft Skills list below.
- Experience with infrastructure automation, testing, DB deployment automation, and logging/monitoring/alerting.
- AWS services experience on CloudFormation, ECS, Elastic Container Registry, Pipelines, Cloudwatch, Glue, and other related services.
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers.
- Good knowledge of and hands-on experience with various AWS services such as EC2, RDS, EKS, S3, Lambda, API Gateway, CloudWatch, etc.
- Quick and effective log analysis to perform root cause analysis (RCA) on production deployment and container errors in CloudWatch.
- Working on ways to automate and improve deployment and release processes.
- Strong understanding of serverless architecture concepts.
- Good with deployment automation tools and investigating and resolving technical issues.
- Sound knowledge of APIs, databases, and container-based ETL jobs.
- Planning out projects and being involved in project management decisions.
Soft Skills
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
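
For the Kafka client optimization mentioned in the first bullet of this section, a minimal sketch using the kafka-python library is shown below; the broker address, topic name, and consumer group id are placeholders, and the tuning values are illustrative rather than prescriptive.

    # Hypothetical sketch of tuned Kafka clients using kafka-python;
    # broker, topic, and group names are placeholders.
    from kafka import KafkaProducer, KafkaConsumer

    # Producer tuned for throughput: batch writes, compress, and wait
    # for all in-sync replicas to acknowledge (durability over latency).
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        acks="all",               # wait for all in-sync replicas
        retries=5,                # retry transient broker errors
        linger_ms=10,             # give batches time to fill
        batch_size=64 * 1024,     # larger batches, fewer requests
        compression_type="gzip",  # trade CPU for network bandwidth
    )
    producer.send("events", b'{"user": 42, "action": "login"}')
    producer.flush()  # block until buffered records are delivered

    # Consumer with manual offset commits, so records are only marked
    # processed after the application has actually handled them.
    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        group_id="etl-workers",
        auto_offset_reset="earliest",
        enable_auto_commit=False,
        max_poll_records=500,
    )
    for record in consumer:
        print(record.offset, record.value)
        consumer.commit()  # commit only after successful processing
        break  # demo: handle a single record and exit

The producer settings trade a few milliseconds of latency (linger_ms) for larger batches and better compression; the consumer disables auto-commit so offsets advance only after records are processed.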

Key Qualifications:
- At least 2 years of hands-on experience with cloud infrastructure on AWS or GCP
- Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
- Knowledge in DevOps tools (e.g. Jenkins, Groovy, and Gradle)
- Familiarity with monitoring and alerting tools(e.g. CloudWatch, ELK stack, Prometheus)
- Proven ability to work independently or as an integral member of a team
Preferred Skills:
- Familiarity with standard IT security practices such as encryption, credentials and key management
- Proven ability to pick up new programming languages (e.g. Java, Python) to support DevOps operations and cloud transformation
- Familiarity in web standards (e.g. REST APIs, web security mechanisms)
- Multi-cloud management experience with GCP / Azure
- Experience in performance tuning, services outage management and troubleshooting
Role & Responsibilities
- The DevOps Engineer will work on the implementation and management of DevOps tools and technologies.
- Create and support advanced pipelines using GitLab.
- Create and support advanced container and serverless environments.
- Deploy cloud infrastructure using Terraform and CloudFormation templates.
- Implement deployments to OpenShift Container Platform, Amazon ECS, and EKS.
- Troubleshoot containerized builds and deployments.
- Implement processes and automation for migrating between OpenShift, AKS, and EKS.
- Implement CI/CD automation.
Required Skillsets
- 3-5 years of cloud-based architecture software engineering experience.
- Deep understanding of Kubernetes and its architecture.
- Mastery of cloud security engineering tools, techniques, and procedures.
- Experience with AWS services such as Amazon S3, EKS, ECS, DynamoDB, AWS Lambda, API Gateway, etc.
- Experience designing and supporting infrastructure via Infrastructure as Code in AWS, using CDK, CloudFormation templates, Terraform, or a similar toolset (see the CDK sketch after this list).
- Experience with tools like Jenkins, GitHub, Puppet, or a similar toolset.
- Experience with monitoring tools like CloudWatch, New Relic, Grafana, Splunk, etc.
- Excellence in verbal and written communication, and in working collaboratively with a variety of colleagues and clients in a remote development environment.
- Proven track record in cloud computing systems and enterprise architecture and security
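
As a concrete example of the Infrastructure-as-Code item above, here is a minimal, hypothetical AWS CDK (v2, Python) stack; the stack, construct, and bucket names are placeholders, not a real deployment.

    # Hypothetical AWS CDK v2 sketch defining a small stack.
    from aws_cdk import App, Stack, RemovalPolicy, aws_s3 as s3
    from constructs import Construct

    class ArtifactStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # A versioned, encrypted bucket for build artifacts.
            s3.Bucket(
                self,
                "ArtifactBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                removal_policy=RemovalPolicy.RETAIN,
            )

    app = App()
    ArtifactStack(app, "artifact-stack")
    app.synth()  # emits a CloudFormation template under cdk.out/

Running `cdk deploy` provisions the stack; `cdk synth` shows the plain CloudFormation template the same code compiles down to.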
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash, etc. (see the boto3 sketch after this list).
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
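
As an illustration of the automation-scripting requirement above, the following hypothetical boto3 sketch audits running EC2 instances for a missing "owner" tag; the region and tag key are assumptions, not part of the original posting.

    # Hypothetical automation sketch using boto3: report running EC2
    # instances missing an "owner" tag.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    paginator = ec2.get_paginator("describe_instances")

    untagged = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "owner" not in tags:
                    untagged.append(instance["InstanceId"])

    print(f"{len(untagged)} running instance(s) missing an owner tag:")
    for instance_id in untagged:
        print(" -", instance_id)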
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Location: Bangalore
Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Deploy highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; a knack for benchmarking and optimization helps here.
● Hire, develop, and cultivate a high-performing, reliable cloud support team.
● Build and operate complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general (see the Pub/Sub sketch after this list).
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensure scaled database setup/monitoring with near-zero downtime.
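
For the Pub/Sub item in the list above, a minimal sketch using the google-cloud-pubsub Python client is shown below; the project and topic IDs are placeholders.

    # Hypothetical Cloud Pub/Sub sketch; project/topic are placeholders.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "orders")

    # publish() is asynchronous and returns a future; the client handles
    # batching, so callers just wait on the result when they need it.
    future = publisher.publish(
        topic_path,
        b'{"order_id": "o-123"}',
        source="checkout",  # keyword args become message attributes
    )
    print("published message id:", future.result(timeout=30))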
Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java
● 5+ years of Linux/Unix administration: monitoring, reliability, and security of Linux-based, online, high-traffic services and web/eCommerce properties
● 5+ years of production experience with large-scale cloud-based infrastructure (GCP preferred)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud: EC2, S3 buckets, RDS
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing High Availability infrastructure and planning for Disaster Recovery solutions
Regards
Team Merito
Role Summary
In this role, you will lead the DevOps function at the organisation and work through all facets of software development: design, prototype, implementation, testing, and documentation.
Responsibilities
- Design, prototype, implement, test, and troubleshoot deployment strategies
- BS in Computer Science, Mathematics, Engineering, or equivalent; MS or higher degree preferred
- Design continuous integration and delivery pipelines using the latest tools and platforms
- Demonstrated understanding of and hands-on experience with Kubernetes, K8s Ingress, Docker, AWS EC2, VPC, Route53, ELB, RDS, MongoDB, Helm, Rancher, Jenkins, VPN, and GKE (see the sketch after this list)
- Create high-level deployment design (HLSD) documents and outline software solutions
- Strong DevOps management experience with a focus on server-side development and database design
- Optimize applications for maximum speed and scalability
- Strong understanding of web technologies, web services, and communication protocols (REST, SOAP APIs), and a proven track record in developing communication between desktop applications and web services
- Knowledge of Agile software development methodologies
- Cooperating with the back-end developer in the process of building the RESTful API
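
For the Kubernetes hands-on item above, here is a small, hypothetical sketch using the official kubernetes Python client; it assumes a working kubeconfig and uses the "default" namespace as a placeholder.

    # Hypothetical cluster-inspection sketch with the kubernetes client.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config, like kubectl
    v1 = client.CoreV1Api()

    # List pods and flag any that are not Running: a building block
    # for deployment troubleshooting scripts.
    for pod in v1.list_namespaced_pod("default").items:
        phase = pod.status.phase
        marker = "" if phase == "Running" else "  <-- check this pod"
        print(f"{pod.metadata.name}: {phase}{marker}")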
Qualification
- Experience with working on code enhancements within a large, complex software system
- Create server-side deployments, primarily in the cloud/collaboration space
- 10+ years of relevant experience in developing and managing web/app products
- Experience with a few of these languages: Shell, Ruby/JRuby, Python, PowerShell, Java, Go
- Hands-on experience in deploying and managing highly scalable cloud applications using AWS and Google Cloud
- Assure that all user input is validated before submitting to the back-end
- Basic understanding of cloud security and hands-on experience in building secure systems
- Prepare accurate implementation task lists/time estimates and deliver assignments per functional specifications, quality standards, and project schedules
Roles and Responsibilities
● Managing Availability, Performance, Capacity of infrastructure and applications.
● Building and implementing observability for applications health/performance/capacity.
● Optimizing On-call rotations and processes.
● Documenting “tribal” knowledge.
● Managing Infra-platforms like
- Mesos/Kubernetes
- CICD
- Observability (Prometheus/New Relic/ELK)
- Cloud Platforms (AWS/Azure)
- Databases
- Data Platforms Infrastructure
● Providing help in onboarding new services with the production readiness review process.
● Providing reports on service SLOs, error budgets, alerts, and operational overhead.
● Working with Dev and Product teams to define SLOs, error budgets, and alerts (see the calculation after this list).
● Working with the Dev team to have an in-depth understanding of the application architecture and its bottlenecks.
● Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
● Managing outages, doing detailed RCAs with developers, and identifying ways to avoid recurrence.
● Managing/Automating upgrades of the infrastructure services.
● Automate toil work.
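
To make the SLO/error-budget items above concrete, here is a small illustrative calculation; the 99.9% SLO and 30-day window are assumed values, not figures from the posting.

    # Illustrative error-budget math for a 99.9% SLO over 30 days.
    SLO = 0.999
    WINDOW_DAYS = 30

    # Total minutes in the window that are allowed to be "bad".
    budget_minutes = (1 - SLO) * WINDOW_DAYS * 24 * 60
    print(f"error budget: {budget_minutes:.1f} minutes per {WINDOW_DAYS} days")
    # -> error budget: 43.2 minutes per 30 days

    def burn_rate(bad_events: int, total_events: int, slo: float = SLO) -> float:
        """How fast the budget is burning: 1.0 means exactly on budget,
        >1.0 means the budget runs out before the window ends."""
        observed_error_ratio = bad_events / total_events
        return observed_error_ratio / (1 - slo)

    # Example: 50 failed requests out of 10,000 burns the budget at
    # 5x the sustainable rate, which would typically trigger an alert.
    print(f"burn rate: {burn_rate(50, 10_000):.1f}x")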
Experience & Skills
● 3+ Years of experience as an SRE/DevOps/Infrastructure Engineer on large scale microservices and infrastructure.
● A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
● A deep understanding of computer science, software development, and networking principles.
● Demonstrated experience with languages such as Python, Java, Golang, etc.
● Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
● Extensive experience in DNS, TCP/IP, UDP, GRPC, Routing and Load Balancing.
● Expertise in GitOps and Infrastructure as Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible.
● Expertise in Amazon Web Services (AWS) and/or other relevant cloud infrastructure solutions like Microsoft Azure or Google Cloud.
● Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo etc.
● Experience in managing and deploying containerized environments using Docker and Mesos/Kubernetes is a plus.
● Experience with multiple datastores is a plus (MySQL, PostgreSQL, Aerospike, Couchbase, Scylla, Cassandra, Elasticsearch).
● Experience with data platform tech stacks like Hadoop, Hive, Presto, etc. is a plus.
- Experience working on Linux based infrastructure
- Strong hands-on knowledge of setting up production, staging, and dev environments on AWS/GCP/Azure
- Strong hands-on knowledge of technologies like Terraform, Docker, Kubernetes
- Strong understanding of continuous integration/testing environments such as Travis CI, CircleCI, Jenkins, etc.
- Configuring and managing databases such as MySQL and Mongo
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
- 2+ years of demonstrable experience leading site reliability and performance in large-scale, high-traffic environments
- 2+ years of hands-on experience as a DevOps engineer
- Strong leadership, communication and interpersonal skills geared to getting things done
- Developing themselves and the talent within their charge, fostering and creating opportunities for the team
- Strong understanding of SRE concepts and the DevOps culture. Set the direction and strategy for your team, and help shape the overall SRE program for the company
- Able to lead on complicated technical issues and communicate status updates/RCAs to management and customers.
- Own site stability, performance, capacity planning, and DevOps recruitment.
2. Extensive expertise in the below areas of AWS development:
3. Amazon DynamoDB, Amazon RDS, Amazon APIs, AWS Elastic Beanstalk, and AWS CloudFormation.
4. Lambda, Kinesis, CodeCommit, CodePipeline.
5. Leveraging AWS SDKs to interact with AWS services from the application (see the boto3 sketch after this list).
6. Writing code that optimizes the performance of AWS services used by the application.
7. Developing with RESTful API interfaces.
8. Code-level application security (IAM roles, credentials, encryption, etc.).
9. Programming language: Python or .NET; programming with AWS APIs.
10. General troubleshooting and debugging.
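
As a sketch of the AWS SDK usage described above, the following hypothetical boto3 snippet works against a DynamoDB table; the table name, key schema, and region are placeholders.

    # Hypothetical boto3 sketch for DynamoDB access.
    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("orders")  # assumes partition key "order_id"

    # Write an item, then read it back by key.
    table.put_item(Item={"order_id": "o-123", "status": "NEW", "total": 1999})
    response = table.get_item(Key={"order_id": "o-123"})
    print(response.get("Item"))

    # Conditional update: only transition NEW -> SHIPPED, never backwards.
    # "status" is a DynamoDB reserved word, hence the name placeholder.
    table.update_item(
        Key={"order_id": "o-123"},
        UpdateExpression="SET #s = :new",
        ConditionExpression="#s = :old",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":new": "SHIPPED", ":old": "NEW"},
    )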
What we are looking for
- Work closely with product & engineering groups to identify and document infrastructure requirements.
- Design infrastructure solutions balancing requirements, operational constraints, and architecture guidelines.
- Implement infrastructure including network connectivity, virtual machines, and monitoring.
- Implement and follow security guidelines, both policy and technical, to protect our customers.
- Resolve incidents as escalated from monitoring solutions and lower tiers.
- Identify root causes of issues and develop long-term solutions to fix recurring issues.
- Automate recurring tasks to increase velocity and quality.
- Partner with the engineering team to build software tolerance for infrastructure failure or issues.
- Research emerging technologies, trends, and methodologies, and enhance existing systems and processes.
Qualifications
- Master's/Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, and two years of experience in software/systems or related.
- 5+ years of overall experience.
Work experience must have included:
- Proven track record in deploying, configuring, and maintaining Ubuntu server systems on premise and in the cloud.
- Minimum of 4 years' experience designing, implementing, and troubleshooting TCP/IP networks, VPNs, load balancers, and firewalls.
- Minimum of 3 years of experience working in public clouds like AWS and Azure.
- Hands-on experience with any of the configuration management tools like Ansible, Chef, and Puppet.
- Strong in performing production operation activities.
- Experience with container and container orchestration tools like Kubernetes and Docker Swarm is a plus.
- Good with source code management tools like Bitbucket and Git.
- Configuring and utilizing monitoring and alerting tools.
- Scripting to automate infrastructure and operational processes (see the sketch after this list).
- Hands-on work to secure networks and systems.
- Sound problem resolution, judgment, negotiating, and decision-making skills.
- Ability to manage and deliver multiple project phases at the same time.
- Strong analytical and organizational skills.
- Excellent written and verbal communication skills.
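
For the scripting-to-automate-operations item above, here is a minimal, self-contained TCP health-check sketch; the host/port inventory is a placeholder.

    # Minimal TCP health-check sketch for operational scripting.
    import socket

    def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholder inventory of services to probe.
    SERVICES = [
        ("db.internal", 5432),
        ("cache.internal", 6379),
        ("web.internal", 443),
    ]

    for host, port in SERVICES:
        status = "up" if check_port(host, port) else "DOWN"
        print(f"{host}:{port} is {status}")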
Interview focus areas
Networks, systems, monitoring
AWS (EC2, S3, VPC)
Problem solving, scripting, network design, systems administration, and troubleshooting scenarios
Culture fit, agility, bias for action, ownership, communication

