
- Installation, configuration management, performance tuning, and monitoring of web, application, and database servers.
- Installation, setup, and management of Java, PHP, and Node.js stacks with software load balancers.
- Install, set up, and administer MySQL, MongoDB, Elasticsearch, and PostgreSQL databases.
- Install, set up, and maintain monitoring solutions such as Nagios and Zabbix (see the health-check sketch after this list).
- Design and implement DevOps processes for new projects in line with the department's automation objectives.
- Collaborate on projects with development teams to provide recommendations, support and guidance.
- Work towards full automation, monitoring, virtualization and containerization.
- Create and maintain tools for deployment, monitoring and operations.
- Automate processes in a scalable, easy-to-understand way that can be detailed and understood through documentation.
- Develop and deploy software that drives improvements to the availability, performance, efficiency, and security of services.
- Maintain 24/7 availability for the systems you are responsible for and participate in an on-call rotation.
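
For flavour, here is a minimal Python sketch of the kind of health-check tooling this role would maintain: an HTTP probe that exits with Nagios-compatible status codes. The URL and thresholds are hypothetical placeholders, not values from the posting.

```python
#!/usr/bin/env python3
"""Minimal HTTP health check with Nagios-style exit codes (0=OK, 1=WARNING, 2=CRITICAL)."""
import sys
import time
import urllib.request

# Hypothetical target and thresholds; adjust per service.
URL = "https://example.com/health"
WARN_SECONDS = 1.0
CRIT_SECONDS = 3.0

def main() -> int:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=CRIT_SECONDS) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200:
                print(f"CRITICAL - {URL} returned HTTP {resp.status}")
                return 2
    except Exception as exc:  # timeout, DNS failure, connection refused, HTTP error, ...
        print(f"CRITICAL - {URL} unreachable: {exc}")
        return 2
    if elapsed > WARN_SECONDS:
        print(f"WARNING - {URL} responded in {elapsed:.2f}s")
        return 1
    print(f"OK - {URL} responded in {elapsed:.2f}s")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A script like this can be wired into Nagios or Zabbix as an external check command.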

About SAMCO Securities Limited
SAMCO is a technology-driven, low-cost stock broker with memberships in all of India's major stock and commodity exchanges. Through the various platforms SAMCO offers, users can place orders to buy or sell securities, and SAMCO works to ensure those transactions are executed successfully.
SAMCO is built on the fundamental idea of using technology to radically improve how clients conduct their business. By cutting the unnecessary extras that traditional brokers carry, the firm reduces costs, and those savings are passed back to customers as predetermined reductions in brokerage fees. This approach also lets the firm avoid charging users on a percentage-of-volume basis and avoid imposing turnover or other monthly commitments.
Similar jobs
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience in deploying .NET (ASP.NET, MVC, Web API, WCF), .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Please Apply - https://zrec.in/L51Qf?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in the journey of implementing DevOps practices and work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers to help them understand the importance of DevOps.
Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a technology company's architecture, security, governance, compliance, and DevOps maturity, help it optimize cloud cost, streamline its technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: Senior DevOps Engineer (Infrastructure/SRE)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediate
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are looking for a Senior DevOps Engineer (Infrastructure) to design, automate, and manage cloud-based and datacentre infrastructure for diverse projects. The ideal candidate will have deep expertise in a public cloud platform (AWS, GCP, or Azure), with a strong focus on cost optimization, security best practices, and infrastructure automation using tools like Terraform and CI/CD pipelines.
This role involves designing scalable architectures (containers, serverless, and VMs), managing databases, and ensuring system observability with tools like Prometheus and Grafana. Strong leadership, client communication, and team mentoring skills are essential. Experience with VPN technologies and configuration management tools (Ansible, Helm) is also critical. Multi-cloud experience and familiarity with APM tools are a plus.
Ideal Candidate Profile
- A solid 4-6 years of experience as a DevOps engineer, with a proven track record of architecting and automating solutions in the cloud
- Experience in troubleshooting production incidents and handling high-pressure situations.
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience with Kubernetes, Terraform, ArgoCD, and Helm.
- Strong with at least one public cloud (AWS, GCP, or Azure)
- Strong with cost optimization and security best practices
- Strong with infrastructure automation using Terraform and CI/CD automation
- Strong with configuration management using Ansible, Helm, etc.
- Good with designing architectures (containers, serverless, VMs, etc.)
- Hands-on experience working on multiple projects
- Strong with client communication and requirements gathering
- Database management experience
- Good experience with Prometheus, Grafana & Alertmanager
- Able to manage multiple clients and take ownership of client issues.
- Experience with Git and coding best practices
- Proficiency in cloud networking, including VPCs, DNS, VPNs (OpenVPN, OpenSwan, Pritunl, Site-to-Site VPNs), load balancers, and firewalls, ensuring secure and efficient connectivity.
- Strong understanding of cloud security best practices, identity and access management (IAM), and compliance requirements for modern infrastructure.
Good to have
- Multi-cloud experience with AWS, GCP & Azure
- Experience with APM & observability tools such as New Relic, Datadog, and OpenTelemetry
- Proficiency in scripting languages (Python, Go) for automation and tooling to improve infrastructure and application reliability, as in the sketch below
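
As a small illustration of the security and scripting expectations above, here is a minimal boto3 sketch (an assumed AWS example, not part of the posting) that flags security groups allowing SSH from anywhere. The region and port are placeholders.

```python
"""Minimal sketch: flag security groups that allow SSH (port 22) from 0.0.0.0/0.

Assumes AWS credentials are available via the usual boto3 credential chain.
"""
import boto3

def open_ssh_groups(region: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                if rule.get("FromPort") == 22 and any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
                ):
                    findings.append(sg["GroupId"])
    return findings

if __name__ == "__main__":
    for group_id in open_ssh_groups():
        print(f"Security group {group_id} allows SSH from anywhere")
```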
Key Responsibilities
- Design and Development:
  - Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
  - Collaborate with product and engineering teams to translate business requirements into technical specifications.
  - Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
  - Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
  - Implement and manage CI/CD pipelines for automated deployment and testing.
  - Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
  - Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
  - Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
  - Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
  - Identify, diagnose, and resolve complex software and infrastructure issues.
  - Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
  - Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
  - Contribute to the continuous improvement of development processes, tools, and methodologies.
  - Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
  - Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
  - Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
  - Serve as a direct point of contact for multiple clients.
  - Handle the unique technical needs and challenges of two or more clients concurrently.
  - Involves both direct interaction with clients and internal team coordination.
- Production Systems Management:
  - Manage, monitor, and debug production environments; extensive prior experience here is required.
  - Troubleshoot complex issues and ensure that production systems run smoothly with minimal downtime.
- Design cloud infrastructure that is secure, scalable, and highly available on AWS, Azure and GCP
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS, Azure, GCP cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS and Azure infrastructure and systems
- Perform infrastructure cost analysis and optimization (see the cost-report sketch below)
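
To make the cost-analysis responsibility concrete, a minimal boto3 sketch that reports month-to-date AWS spend per service via Cost Explorer; the same idea applies to other clouds' billing APIs. This is illustrative only and not part of the original posting.

```python
"""Minimal sketch: month-to-date AWS spend per service via Cost Explorer."""
import datetime as dt
import boto3

def spend_by_service():
    today = dt.date.today()
    start = today.replace(day=1).isoformat()   # first day of the current month
    end = today.isoformat()
    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")

if __name__ == "__main__":
    spend_by_service()
```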
Skills We Require: DevOps, AWS Administration, Terraform, Infrastructure as Code
Summary:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Good hands-on experience with DevOps, AWS administration, Terraform, and Infrastructure as Code
Knowledge of EC2, Lambda, S3, ELB, VPC, IAM, CloudWatch, CentOS, and server hardening (a CloudWatch alarm sketch follows this section)
Ability to understand business requirements and translate them into technical requirements
A knack for benchmarking and optimization
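
As an illustration of the EC2/CloudWatch knowledge listed above, a minimal boto3 sketch that creates a CPU-utilisation alarm for an instance. The instance ID, region, threshold, and SNS topic are hypothetical placeholders.

```python
"""Minimal sketch: create a CloudWatch CPU alarm for an EC2 instance."""
import boto3

# Hypothetical identifiers; replace with real resources.
INSTANCE_ID = "i-0123456789abcdef0"
SNS_TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:ops-alerts"

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=2,      # alarm after 10 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
)
print("Alarm created")
```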
Why join us
The possibility of having massive societal impact. Our software touches the lives of hundreds of millions of people.
Solving hard governance and societal challenges
Work directly with central and state government leaders and other dignitaries
Mentorship from world class people and rich ecosystems
Position : Architect or Technical Lead - DevOps
Location : Bangalore
Role:
Strong architecture and system design skills for multi-tenant, multi-region, redundant, and highly available mission-critical systems.
Clear understanding of core cloud platform technologies across public and private clouds.
Lead DevOps practices for Continuous Integration and Continuous Deployment pipeline and IT operations practices, scaling, metrics, as well as running day-to-day operations from development to production for the platform.
Implement the non-functional requirements needed for world-class reliability, scale, security, and cost-efficiency throughout the product development lifecycle.
Drive a culture of automation, and self-service enablement for developers.
Work with information security leadership and technical team to automate security into the platform and services.
Meet infrastructure SLAs and compliance control points of cloud platforms.
Define and contribute to initiatives to continually improve Solution Delivery processes.
Improve the organisation's capability to build, deliver and scale software as a service on the cloud.
Interface with Engineering, Product and Technical Architecture group to meet joint objectives.
You are deeply motivated & have knowledge of solving hard to crack societal challenges with the stamina to see it all the way through.
You must have 7+ years of hands-on experience with DevOps, microservices architecture (MSA) and the Kubernetes platform.
You must have 2+ years of strong Kubernetes experience and must hold the CKA (Certified Kubernetes Administrator) certification.
You should be proficient in tools/technologies involved for a microservices architecture and deployed in a multi cloud kubernetes environment.
You should have hands-on experience in architecting and building CI/CD pipelines from code check-in until production.
You should have strong knowledge of Docker, Jenkins pipeline scripting, Helm charts, Kubernetes objects and manifests, and Prometheus, and demonstrate hands-on knowledge of multi-cloud computing, storage and networking.
You should have experience in designing, provisioning and administering, using best practices, across multiple public cloud offerings (Azure, AWS, GCP) and/or private cloud offerings (OpenStack, VMware, Nutanix, NIC, bare-metal infrastructure).
You should have experience in setting up logging and monitoring for cloud application performance, error rates, and error budgets while tracking adherence to SLOs and SLAs (see the error-budget sketch below).
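
To illustrate the SLO and error-budget tracking mentioned above, a minimal Python sketch that queries a Prometheus HTTP API for a 30-day error ratio and reports the remaining error budget against an assumed 99.9% SLO. The Prometheus URL and metric names are hypothetical and depend on your instrumentation.

```python
"""Minimal sketch: compute remaining error budget from Prometheus metrics."""
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"   # hypothetical
SLO_TARGET = 0.999                                           # assumed 99.9% availability

# Hypothetical metric and label names.
QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[30d]))'
    " / sum(rate(http_requests_total[30d]))"
)

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

error_ratio = float(result[0]["value"][1]) if result else 0.0
budget = 1.0 - SLO_TARGET                       # allowed error ratio, e.g. 0.001
remaining = 1.0 - (error_ratio / budget) if budget else 0.0

print(f"30d error ratio: {error_ratio:.5f}")
print(f"Error budget remaining: {remaining:.1%}")
```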
LogiNext is looking for a technically savvy and passionate DevOps Engineer to support the development and operations efforts on its product. You will choose and deploy the tools and technologies needed to build and support a robust and scalable infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in non-production and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations
Requirements:
- Bachelor's degree in Computer Science, Information Technology or a related field
- 2 to 4 years of experience in designing and maintaining high-volume and scalable microservices architecture on cloud infrastructure
- Strong background in Linux/Unix administration and Python/Shell scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch monitoring, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems such as Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose and Kubernetes (a pod health-check sketch follows this list)
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby or Bash.
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points if you have experience with Nginx, Postgres, Redis, and MongoDB systems in production.
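
As a small example of the Kubernetes-plus-Python automation referenced in this list, a minimal sketch using the official kubernetes Python client to report pods with excessive container restarts. The namespace and restart threshold are hypothetical placeholders.

```python
"""Minimal sketch: list pods whose containers have restarted more than a threshold."""
from kubernetes import client, config

RESTART_THRESHOLD = 5          # hypothetical threshold
NAMESPACE = "production"       # hypothetical namespace

def noisy_pods():
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(NAMESPACE).items:
        for status in pod.status.container_statuses or []:
            if status.restart_count > RESTART_THRESHOLD:
                yield pod.metadata.name, status.name, status.restart_count

if __name__ == "__main__":
    for pod_name, container, restarts in noisy_pods():
        print(f"{pod_name}/{container}: {restarts} restarts")
```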

Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting-edge tools and technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Understanding of any scripting or programming language.
Configuring and managing databases such as MySQL (see the backup sketch below)
Working knowledge of various tools, open-source technologies, and cloud services (AWS)
Implementing automation tools (Ansible, Jenkins) for deploying and provisioning IT infrastructure
Excellent troubleshooting of cloud systems.
Awareness of critical concepts in DevOps principles.
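
To illustrate the database and AWS automation items above, a minimal sketch that dumps a MySQL database with mysqldump and uploads it to S3 with boto3. The database name, bucket, and credential handling are hypothetical; real credentials belong in a secrets manager or a client config file, never in code.

```python
"""Minimal sketch: mysqldump uploaded to S3."""
import datetime as dt
import subprocess

import boto3

# Hypothetical values; replace with real resources.
DB_NAME = "appdb"
BUCKET = "example-db-backups"

def backup_to_s3():
    stamp = dt.datetime.now(dt.timezone.utc).strftime("%Y%m%d-%H%M%S")
    dump_file = f"/tmp/{DB_NAME}-{stamp}.sql"

    # Relies on ~/.my.cnf (or similar) for credentials so none appear on the command line.
    with open(dump_file, "wb") as out:
        subprocess.run(["mysqldump", "--single-transaction", DB_NAME],
                       stdout=out, check=True)

    boto3.client("s3").upload_file(dump_file, BUCKET, f"mysql/{DB_NAME}/{stamp}.sql")
    print(f"Uploaded {dump_file} to s3://{BUCKET}/mysql/{DB_NAME}/{stamp}.sql")

if __name__ == "__main__":
    backup_to_s3()
```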
Your Role:
- Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services
- Gain deep knowledge of our complex applications
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment
- Should be able to lead a team of smart engineers
- Should be able to strategically guide the team to greater automation adoption
Must Have:
- Experience Building/managing DevOps/SRE teams
- Strong in troubleshooting/debugging Systems, Network and Applications
- Strong in Unix/Linux operating systems and Networking
- Working knowledge of Open source technologies in Monitoring, Deployment and incident management
Good to Have:
- 3+ years of team management experience
- Experience in containers and orchestration layers like Kubernetes, Mesos/Marathon
- Proven experience in programming & diagnostics in languages like Go, Python, Java
- Experience in NoSQL/SQL technologies like Cassandra, MySQL, Couchbase, etc.
- Experience in BigData technologies like Kafka/Hadoop/Airflow/Spark
- Is a die-hard sports fan

