
Experience: 8+ Years
AWS Certification is a must.
Location: Pan India

Please Apply - https://zrec.in/RZ7zE?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues that stand in the way of successfully implementing DevOps practices and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them understand the importance of DevOps.
Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.
We assess a technology company's architecture, security, governance, compliance, and DevOps maturity, and help it optimize cloud costs, streamline its technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We set up tools for monitoring, logging, and observability, and focus on bringing the DevOps culture into the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer GCP
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals to deploy, automate, and manage the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack:
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources (a minimal automation sketch follows this list).
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in GCP; experience with AWS or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
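To make the Terraform expectation concrete, here is a minimal sketch of the kind of IaC automation this stack implies: a small Python wrapper that runs terraform init and apply for a module directory. The directory path, variable-file name, and module layout are placeholder assumptions, not part of this role.

```python
# Minimal sketch: drive Terraform from a Python automation script.
# The working directory and var-file name are placeholders.
import subprocess
from pathlib import Path


def terraform_apply(workdir: str, var_file: str = "gcp.tfvars", auto_approve: bool = True) -> None:
    """Run `terraform init` and `terraform apply` in the given directory."""
    if not Path(workdir).is_dir():
        raise FileNotFoundError(f"Terraform directory not found: {workdir}")

    base = ["terraform", f"-chdir={workdir}"]
    subprocess.run(base + ["init", "-input=false"], check=True)

    apply_cmd = base + ["apply", "-input=false", f"-var-file={var_file}"]
    if auto_approve:
        apply_cmd.append("-auto-approve")
    subprocess.run(apply_cmd, check=True)


if __name__ == "__main__":
    # Example: apply an (assumed) GKE module for a staging environment.
    terraform_apply("infra/gke-staging")
```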
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting (see the monitoring sketch after this list)
- Containerizing different applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping the cost of the infrastructure to a minimum
- Setting up the right set of security measures
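As a rough illustration of the monitoring and alerting responsibility above, the sketch below polls a Prometheus server for firing alerts over its HTTP API. The Prometheus URL and the PromQL expression are assumptions for illustration only.

```python
# Illustrative check against the Prometheus HTTP API (/api/v1/query).
# PROM_URL is a placeholder; adjust to the actual monitoring endpoint.
import requests

PROM_URL = "http://prometheus.internal:9090"


def firing_alerts(prom_url: str = PROM_URL) -> list:
    """Return currently firing alerts as reported by the ALERTS metric."""
    resp = requests.get(
        f"{prom_url}/api/v1/query",
        params={"query": 'ALERTS{alertstate="firing"}'},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {payload}")
    return payload["data"]["result"]


if __name__ == "__main__":
    for series in firing_alerts():
        labels = series["metric"]
        print(f'{labels.get("alertname", "unknown")} firing for {labels.get("job", "n/a")}')
```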
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in a Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Ability to work with minimal supervision as a self-starter
- Hands-on experience with at least one scripting language - Bash, Python, Go, etc.
- Experience with version control systems like Git
- Strong experience with GCP.
- Strong experience with managing the Production Systems day in and day out
- Experience in finding and fixing issues across different layers of the architecture in a production environment
- Knowledge of SQL and NoSQL databases, ElasticSearch, Solr etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience in automation tools like Ansible/SaltStack and Jenkins
- Experience in Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools (Vault, Vagrant, Terraform, Consul) and VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus, Grafana, or Elastic APM.
- Experience with logging tools like ELK or Loki.
What we offer:
The possibility of having massive societal impact: our software touches the lives of hundreds of millions of people.
Solving hard governance and societal challenges.
Working directly with central and state government leaders and other dignitaries.
Mentorship from world-class people and rich ecosystems.
Position: Architect or Technical Lead - DevOps
Location: Bangalore
Role:
Strong knowledge of architecture and system design for multi-tenant, multi-region, redundant, and highly available mission-critical systems.
Clear understanding of core cloud platform technologies across public and private clouds.
Lead DevOps practices for Continuous Integration and Continuous Deployment pipeline and IT operations practices, scaling, metrics, as well as running day-to-day operations from development to production for the platform.
Implement the non-functional requirements needed for world-class reliability, scale, security, and cost-efficiency throughout the product development lifecycle.
Drive a culture of automation, and self-service enablement for developers.
Work with information security leadership and technical team to automate security into the platform and services.
Meet infrastructure SLAs and compliance control points of cloud platforms.
Define and contribute to initiatives to continually improve Solution Delivery processes.
Improve the organisation's capability to build, deliver, and scale software as a service on the cloud.
Interface with Engineering, Product and Technical Architecture group to meet joint objectives.
You are deeply motivated and have experience solving hard-to-crack societal challenges, with the stamina to see them all the way through.
You must have 7+ years of hands-on experience with DevOps, microservices architecture (MSA), and the Kubernetes platform.
You must have 2+ years of strong Kubernetes experience and hold the CKA (Certified Kubernetes Administrator) certification.
You should be proficient in the tools and technologies involved in a microservices architecture deployed in a multi-cloud Kubernetes environment.
You should have hands-on experience in architecting and building CI/CD pipelines from code check-in to production.
You should have strong knowledge of Docker, Jenkins pipeline scripting, Helm charts, Kubernetes objects and manifests, and Prometheus, and demonstrate hands-on knowledge of multi-cloud compute, storage, and networking (see the Kubernetes sketch after this section).
You should have experience in designing, provisioning, and administering using best practices across multiple public cloud offerings (Azure, AWS, GCP) and/or private cloud offerings (OpenStack, VMware, Nutanix, NIC, bare-metal infrastructure).
You should have experience in setting up logging and monitoring for cloud application performance, error rates, and error budgets while tracking adherence to SLOs and SLAs.
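As a small, illustrative example of working with Kubernetes objects programmatically, the sketch below uses the official Kubernetes Python client to flag Deployments that are not fully rolled out. The namespace and kubeconfig location are assumptions.

```python
# Sketch: flag Deployments whose ready replicas lag the desired count,
# using the official Kubernetes Python client. Namespace is a placeholder.
from kubernetes import client, config


def unready_deployments(namespace: str = "production") -> list:
    """Return names of Deployments that are not fully rolled out."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    lagging = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            lagging.append(f"{dep.metadata.name} ({ready}/{desired} ready)")
    return lagging


if __name__ == "__main__":
    for name in unready_deployments():
        print("Not fully rolled out:", name)
```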
Job Description: DevOps Engineer
About Hyno:
Hyno Technologies is a unique blend of top-notch designers and world-class developers for new-age product development. Within the last 2 years we have collaborated with 32 young startups from India, the US, and the EU to find the optimum solutions to their complex business problems. We have helped them address issues of scalability and optimisation through the use of technology at minimal cost. To us, any new challenge is an opportunity.
As part of Hyno's expansion plans, Hyno, in partnership with Sparity, is seeking an experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in enhancing our software development processes, optimising system infrastructure, and ensuring the seamless deployment of applications. If you are passionate about leveraging cutting-edge technologies to drive efficiency, reliability, and scalability in software development, this is the perfect opportunity for you.
Position: DevOps Engineer
Experience: 5-7 years
Responsibilities:
- Collaborate with cross-functional teams to design, develop, and implement CI/CD pipelines for automated application deployment, testing, and monitoring.
- Manage and maintain cloud infrastructure using tools like AWS, Azure, or GCP, ensuring scalability, security, and high availability.
- Develop and implement infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate the provisioning and management of resources (a minimal CloudFormation sketch follows this list).
- Constantly evaluate continuous integration and continuous deployment solutions as the industry evolves, and develop standardised best practices.
- Work closely with development teams to provide support and guidance in building applications with a focus on scalability, reliability, and security.
- Perform regular security assessments and implement best practices for securing the entire development and deployment pipeline.
- Troubleshoot and resolve issues related to infrastructure, deployment, and application performance in a timely manner.
- Follow regulatory and ISO 13485 requirements.
- Stay updated with industry trends and emerging technologies in the DevOps and cloud space, and proactively suggest improvements to current processes.
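To ground the IaC responsibility above, here is a minimal boto3 sketch that creates a CloudFormation stack from a local template and waits for it to finish. The stack name, template path, and region are placeholders.

```python
# Sketch: create a CloudFormation stack with boto3 and wait for completion.
# Stack name, template path, and region are placeholders.
import boto3


def deploy_stack(stack_name: str, template_path: str, region: str = "us-east-1") -> str:
    """Create a stack from a local template and return its final status."""
    cfn = boto3.client("cloudformation", region_name=region)
    with open(template_path) as fh:
        template_body = fh.read()
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]


if __name__ == "__main__":
    print(deploy_stack("demo-network", "templates/network.yaml"))
```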
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent work experience).
- Minimum of 5 years of hands-on experience in DevOps, system administration, or related roles.
- Solid understanding of containerization technologies (Docker, Kubernetes) and orchestration tools
- Strong experience with cloud platforms such as AWS, Azure, or GCP, including services like ECS, S3, RDS, and more.
- Proficiency in at least one programming/scripting language such as Python, Bash, or PowerShell.
- Demonstrated experience in building and maintaining CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Experience with container (Docker, ECS, EKS), serverless (Lambda), and Virtual Machine (VMware, KVM) architectures.
- Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or Pulumi.
- Strong knowledge of monitoring and logging tools such as Prometheus, ELK stack, or Splunk.
- Excellent problem-solving skills and the ability to work effectively in a fast-paced, collaborative environment.
- Strong communication skills and the ability to work independently as well as in a team.
Nice to Have:
- Relevant certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, Certified Kubernetes Administrator (CKA), etc.
- Experience with microservices architecture and serverless computing.
Soft Skills:
- Excellent written and verbal communication skills.
- Ability to manage conflict effectively.
- Ability to adapt and be productive in a dynamic environment.
- Strong communication and collaboration skills supporting multiple stakeholders and business operations.
- Self-starter, self-managed, and a team player.
Join us in shaping the future of DevOps at Hyno in collaboration with Sparity. If you are a highly motivated and skilled DevOps Engineer, eager to make an impact in a remote setting, we'd love to hear from you.
Position Summary
The Cloud Engineer helps to solutionize, enable, migrate, and onboard clients onto a secure cloud platform, offloading the heavy lifting so that clients can focus on their own business value creation.
Job Description
- Assessing existing customer systems and/or cloud environments to determine the best migration approach and the supporting tools to use
- Build a secure and compliant cloud environment, with a proven enterprise operating model, on-going cost optimisation, and day-to-day infrastructure management
- Provide and implement cloud solutions that reduce operational overhead and risk, automate common activities such as change requests, monitoring, patch management, security, and backup services, and provide full-lifecycle services to provision, run, and support client infrastructure
- Collaborate with internal service teams to meet the clients’ needs for their infrastructure and application deployments
- Troubleshoot complex infrastructure deployments, recreate customer issues, and build proof-of-concept environments that follow cloud best practices and well-architected frameworks
- Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs
- Work on critical, highly complex customer problems that will span across multiple cloud platforms and services
- Identify and drive improvements on process and technical issues; act as an escalation point of contact for clients
- Drive client meetings and communication during reviews
Requirement:
- Degree in computer science or a similar field.
- At least 2 years of experience in the field of cloud computing.
- Experience with CI/CD systems.
- Strong knowledge of cloud services
- Exposure to AWS/GCP and other cloud-based infrastructure platforms
- Experience with AWS configuration and management – EC2, S3, EBS, ELB, IAM, VPC, RDS, CloudFront, etc. (a small boto3 sketch follows this list)
- Exposure to architecting, designing, developing, and implementing cloud solutions on AWS or other cloud platforms such as Azure and Google Cloud
- Proficient in the use and administration of all versions of MS Windows Server
- Experience with Linux and Windows system administration and web server configuration and monitoring
- Solid programming skills in Python, Java, Perl
- Good understanding of software design principles and best practices
- Good knowledge of REST APIs
- Should have hands-on experience with at least one deployment orchestration tool (Jenkins, UrbanCode, Bamboo, etc.)
- Experience with Docker, Kubernetes, and Helm charts
- Hands-on experience with Ansible and Git repositories
- Knowledge of Maven/Gradle
- Azure, AWS, and GCP certifications are preferred.
- Troubleshooting and analytical skills.
- Good communication and collaboration skills.
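As a small illustration of the AWS management experience listed above, the sketch below uses boto3 to inventory running EC2 instances and S3 buckets. The region is an assumption, and credentials are taken from the standard AWS credential chain.

```python
# Sketch: a small AWS inventory check with boto3 (EC2 and S3).
# The region is a placeholder; credentials come from the standard chain.
import boto3


def running_instances(region: str = "ap-south-1") -> list:
    """Return IDs of running EC2 instances in the region."""
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    return [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]


def bucket_names() -> list:
    """Return the names of all S3 buckets visible to these credentials."""
    s3 = boto3.client("s3")
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]


if __name__ == "__main__":
    print("Running EC2 instances:", running_instances())
    print("S3 buckets:", bucket_names())
```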
We are a self-organized engineering team with a passion for programming and solving business problems for our customers. We are looking to expand our team's DevOps capabilities and are on the lookout for 4 DevOps professionals with 4-8 years of relevant hands-on technical experience.
We encourage our team to continuously learn new technologies and apply those learnings in day-to-day work even if the new technologies are not adopted. We strive to continuously improve our DevOps practices and expertise to form a solid backbone for the product, customer relationship, and sales teams, enabling them to add new customers to our financing network every week.
As a DevOps Engineer, you:
- Will work collaboratively with the engineering and customer support teams to deploy and operate our systems.
- Build and maintain tools for deployment, monitoring and operations.
- Help automate and streamline our operations and processes.
- Troubleshoot and resolve issues in our test and production environments.
- Take control of various mandates and change management processes to ensure compliance for various certifications (PCI and ISO 27001 in particular)
- Monitor and optimize the usage of various cloud services.
- Setup and enforce CI/CD processes and practices
Skills required:
- Strong experience with AWS services (EC2, ECS, ELB, S3, SES, to name a few)
- Strong background in Linux/Unix administration and hardening
- Experience with automation using Ansible, Terraform or equivalent
- Experience with continuous integration and continuous deployment tools (Jenkins)
- Experience with container-related technologies (Docker, LXC, rkt, Docker Swarm, Kubernetes)
- Working understanding of code and script (Python, Perl, Ruby, Java)
- Working understanding of SQL and databases
- Working understanding of version control systems (Git is preferred)
- Managing IT operations, setting up best practices, and tuning them from time to time
- Ensuring that process overheads do not reduce the productivity and effectiveness of a small team
- Willingness to explore and learn new technologies and continuously refactor the tools and processes
- Experience using AWS (that’s just common sense)
- Experience designing and building web environments on AWS, which includes working with services like EC2, ELB, RDS, and S3
- Experience building and maintaining cloud-native applications
- A solid background in Linux/Unix and Windows server system administration
- Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube
- Experience installing and configuring different application servers such as JBoss, Tomcat, and WebLogic
- Experience using monitoring solutions like CloudWatch, ELK Stack, and Prometheus
- An understanding of writing Infrastructure-as-Code (IaC), using tools like CloudFormation or Terraform
- Knowledge of one or more of the programming languages most used in today's cloud computing (e.g., SQL and XML for data, R and Clojure for math, Haskell and Erlang for functional work, and Python and Go for procedural work)
- Experience in troubleshooting distributed systems
- Proficiency in script development and scripting languages
- The ability to be a team player
- The ability and skill to train other people in procedural and technical topics
- Strong communication and collaboration skills
As a special aside, an AWS engineer who works in DevOps should also have experience with:
- The theory, concepts, and real-world application of Continuous Delivery (CD), which requires familiarity with tools like AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline (a minimal CodePipeline sketch follows this list)
- An understanding of automation
- Good experience with AWS services like Elastic Compute Cloud (EC2), IAM, RDS, API Gateway, Cognito, etc.
- Using Git, SonarQube, Ansible, Nexus, Nagios, etc.
- Strong experience in creating, importing, and launching volumes with security groups, auto-scaling, load balancers, and fault-tolerant setups
- Experience in configuring Jenkins jobs with the related plugins for building, testing, and continuous deployment to accomplish complete CI/CD.
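To make the Continuous Delivery point above concrete, here is an illustrative boto3 sketch that starts an AWS CodePipeline execution and prints the state of each stage. The pipeline name and region are placeholders.

```python
# Sketch: start a CodePipeline release and report stage status with boto3.
# The pipeline name and region are placeholders.
import boto3


def release(pipeline_name: str, region: str = "us-east-1") -> None:
    """Start a pipeline execution and print the latest status of each stage."""
    cp = boto3.client("codepipeline", region_name=region)
    execution = cp.start_pipeline_execution(name=pipeline_name)
    print("Started execution:", execution["pipelineExecutionId"])

    state = cp.get_pipeline_state(name=pipeline_name)
    for stage in state["stageStates"]:
        latest = stage.get("latestExecution", {})
        print(f'{stage["stageName"]}: {latest.get("status", "unknown")}')


if __name__ == "__main__":
    release("web-app-pipeline")
```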
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to financial institutions (FIs) to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions' call centers from a cost center to a revenue center.
Our core technology is built 100% in-house, with several breakthroughs in Natural Language Understanding. Our parser is built on zero-shot learning, which helps us launch industry-specific IVAs that can achieve over 90% accuracy on Day 1.
We are 45 people strong, with employees spread across India and US locations. Many of them come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with 20+ years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who have previously been part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative set up
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai's products and services, as well as enhance interface.ai's employees' productivity (a minimal probe sketch follows this list)
- Debugging and optimizing code and automating routine tasks
- Troubleshooting and diagnosing issues (hardware or software), and proposing and implementing solutions so that they recur less frequently
- Performing periodic on-call duty to handle the security, availability, and reliability of interface.ai's products
- Following and writing good code and solid engineering practices
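As a sketch of the kind of validation script this role builds, the snippet below probes a list of HTTP endpoints and reports availability and latency. The URLs are placeholders, not real interface.ai endpoints.

```python
# Sketch: probe service endpoints and report availability and latency.
# The endpoint URLs are placeholders.
import time

import requests

ENDPOINTS = [
    "https://example.internal/healthz",
    "https://example.internal/api/status",
]


def probe(url: str, timeout: float = 5.0):
    """Return (is_healthy, latency_seconds) for one endpoint."""
    start = time.monotonic()
    try:
        healthy = requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        healthy = False
    return healthy, time.monotonic() - start


if __name__ == "__main__":
    for url in ENDPOINTS:
        ok, latency = probe(url)
        print(f'{"UP" if ok else "DOWN":4} {latency * 1000:7.1f} ms  {url}')
```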
Requirements
You can be a great fit if you are:
- Extremely self-motivated
- Able to learn quickly
- Growth Mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional Maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Worked in a startup of 5 to 30 employees
- A developer with a strong interest in systems design, who will build, maintain, and scale our cloud infrastructure through software tooling and automation
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong System/network debugging skills
- Experience with management/automation tools such as Terraform/Puppet/Chef/SALT
- Experience with setting up production-level monitoring and telemetry
- Expertise in Container management & AWS
- Experience with kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, and Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
We have an excellent job opportunity for the position of AWS Infra Architect with a reputed multinational company in Hyderabad.
Mandatory Skills: Please find the expectations below
- At least 3+ years of experience as an Architect in AWS (primary skill)
- Designing, planning, and implementing; providing solutions in designing the architecture
- Automation using Terraform/PowerShell/Python
- Should have good experience with CloudFormation templates
- Experience with CloudWatch (a minimal alarm sketch follows this list)
- Security in AWS
- Strong Linux administration skills
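As a concrete, illustrative example of the CloudWatch experience above, the sketch below creates a CPU utilization alarm for an EC2 instance with boto3. The instance ID, SNS topic ARN, thresholds, and region are placeholders.

```python
# Sketch: create a CloudWatch CPU alarm for an EC2 instance with boto3.
# Instance ID, SNS topic ARN, thresholds, and region are placeholders.
import boto3


def create_cpu_alarm(instance_id: str, sns_topic_arn: str, region: str = "us-east-1") -> None:
    """Alarm when average CPU stays above 80% for two 5-minute periods."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],
    )


if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:111122223333:ops-alerts")
```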
1. Developing a video player website where students can learn various courses, view e-books, solve tests, etc.
2. Building the product to reach higher scalability
3. Developing software to integrate with internal back-end systems
4. Working on AWS cloud platform
5. Working on Amazon EC2, Amazon S3 buckets, and Git
6. Working on the implementation of continuous integration and deployment pipelines using Jenkins (mandatory; see the sketch after this list)
7. Monitoring, troubleshooting, and diagnosing infrastructure systems (excellent knowledge required for the same)
8. Building tools to reduce the occurrences of errors and improve customer experience
9. Should have experience with the MERN stack too.
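To illustrate point 6, here is a minimal sketch that triggers a parameterised Jenkins build over the Jenkins REST API. The Jenkins URL, job name, parameter name, and credentials are placeholders; authentication with an API token is assumed.

```python
# Sketch: queue a parameterised Jenkins build via the REST API.
# URL, job name, parameter, and credentials are placeholders; an API token
# is assumed for authentication.
import requests

JENKINS_URL = "https://jenkins.example.com"
JOB = "video-player-deploy"


def trigger_build(branch: str, user: str, api_token: str) -> int:
    """Queue a build for the given branch; HTTP 201 means it was accepted."""
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
        params={"BRANCH": branch},
        auth=(user, api_token),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.status_code


if __name__ == "__main__":
    print(trigger_build("main", "ci-bot", "REPLACE_WITH_API_TOKEN"))
```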

