

Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments
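For applicants wondering what the pipeline-automation work above looks like in practice, here is a minimal Python sketch of a post-deploy health gate a CI/CD pipeline might run. The `probe` callable and the retry parameters are illustrative stand-ins, not part of this listing.

```python
import time

def wait_until_healthy(probe, attempts=5, base_delay=1.0):
    """Poll a health probe with exponential backoff.

    `probe` is any zero-argument callable returning True when the
    service is up; a real pipeline would wrap an HTTP check here.
    Returns True if the service became healthy, False otherwise.
    """
    for attempt in range(attempts):
        if probe():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Simulated probe: the service comes up on the third check.
responses = iter([False, False, True])
print(wait_until_healthy(lambda: next(responses), base_delay=0.01))  # True
```

A real gate would wrap `urllib.request` or `curl` in the probe and fail the pipeline stage on a `False` return.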

Job Title: Senior DevOps Engineer
Location: Sector 39, Gurgaon (Onsite)
Employment Type: Full-Time
Working Days: 6 Days (Alternate Saturdays Working)
Experience Required: 5+ Years
Team Role: Lead & Mentor a team of 3–4 engineers
About the Role
We are seeking a highly skilled Senior DevOps Engineer to lead our infrastructure and automation initiatives while mentoring a small team. This role involves setting up and managing physical and cloud-based servers, configuring storage systems, and implementing automation to ensure high system availability and reliability. The ideal candidate will have strong Linux administration skills, hands-on experience with DevOps tools, and the leadership capabilities to guide and grow the team.
Key Responsibilities
Infrastructure & Server Management (60%)
- Set up, configure, and manage bare-metal (physical) servers as well as cloud-based environments.
- Configure network bonding, firewalls, and system security for optimal performance and reliability.
- Implement and maintain high-availability solutions for mission-critical systems.
Queue Systems (Kafka / RabbitMQ) (15%)
- Deploy and manage message queue systems to support high-throughput, real-time data exchange.
- Ensure reliable event-driven communication between distributed services.
Storage Systems (SAN/NAS) (15%)
- Configure and manage Storage Area Networks (SAN) and Network Attached Storage (NAS).
- Optimize storage performance, redundancy, and availability.
Database Administration (5%)
- Administer and optimize MariaDB, MySQL, MongoDB, Redis, and Elasticsearch.
- Handle backup, recovery, replication, and performance tuning.
General DevOps & Automation
- Deploy product updates, patches, and fixes while ensuring minimal downtime.
- Design and manage CI/CD pipelines using Jenkins or similar tools.
- Administer and automate workflows with Docker, Kubernetes, Ansible, AWS, and Git.
- Manage web and application servers (Apache httpd, Tomcat).
- Implement monitoring, logging, and alerting systems (Nagios, HAProxy, Keepalived).
- Conduct root cause analysis and implement automation to reduce manual interventions.
- Mentor a team of 3–4 engineers, fostering best practices and continuous improvement.
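Minimal-downtime deployments like those described above often reduce to a small promotion rule. A hypothetical Python sketch of a canary gate, comparing the canary's error rate against the stable baseline before promoting; the thresholds are illustrative, not recommendations:

```python
def should_promote(canary_errors, canary_total,
                   baseline_errors, baseline_total,
                   max_ratio=1.5, min_requests=100):
    """Return True if the canary's error rate is acceptable.

    Requires a minimum sample size before deciding, then promotes only
    if the canary's error rate is within `max_ratio` times the
    baseline's (with a small floor to tolerate a zero baseline).
    """
    if canary_total < min_requests:
        return False  # not enough traffic to judge
    canary_rate = canary_errors / canary_total
    baseline_rate = baseline_errors / baseline_total
    return canary_rate <= max_ratio * max(baseline_rate, 0.001)

print(should_promote(1, 1000, 10, 10000))  # True: 0.1% vs 0.1% baseline
print(should_promote(5, 1000, 10, 10000))  # False: 0.5% vs 0.1% baseline
```

In production this decision would typically be driven by metrics pulled from a monitoring system rather than raw counters.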
Required Skills & Qualifications
✅ 5+ years of proven DevOps engineering experience
✅ Strong expertise in Linux administration & shell scripting
✅ Hands-on experience with bare-metal server management & storage systems
✅ Proficiency in Docker, Kubernetes, AWS, Jenkins, Git, and Ansible
✅ Experience with Kafka or RabbitMQ in production environments
✅ Knowledge of CI/CD, automation, monitoring, and high-availability tools (Nagios, HAProxy, Keepalived)
✅ Excellent problem-solving, troubleshooting, and leadership abilities
✅ Strong communication skills with the ability to mentor and lead teams
Good to Have
- Experience in Telecom projects involving SMS, voice, or real-time data handling.

Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
Job Title: DevOps SDE III
Job Summary
Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.
Responsibilities and Duties
In this role, you will
- Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
- Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
- Build out and improve our observability infrastructure.
- Set up automated auditing processes and improve our applications' security posture.
- Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
- Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
- Continuously improve our build pipelines, automated deployments, and automated testing.
- Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
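Observability work like the above frequently starts from SLO arithmetic. A small illustrative sketch (not Porter's actual tooling) computing the remaining error budget for a request-based SLO:

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget left for a request-based SLO.

    With a 99.9% target, the budget is 0.1% of all requests; returns
    1.0 when no budget is spent and goes negative when the SLO is blown.
    """
    budget = (1.0 - slo_target) * total_requests  # allowed failures
    return (budget - failed_requests) / budget

# A 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 failures leaves 75% of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 6))  # 0.75
```

Teams typically alert when the budget burn rate, not the absolute count, gets too high.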
Qualifications and Skills
Hard requirements for this role:
- 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
- Experience with Git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS, and Kubernetes.)
- Experience in working with infrastructure as code (Terraform/CloudFormation).
- Linux and networking administration experience.
- Strong Linux Shell scripting experience.
- Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred)
- Experience with configuration management tools like Ansible and Packer.
- Experience with container orchestration tools. (Kubernetes/ECS).
- Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
- AWS SysOps administrator + Developer certification or equivalent knowledge
Good to have:
- Experience working with the ELK stack.
- Experience supporting JVM applications.
- Experience working with APM tools. (We use Datadog)
- Experience working in an XaaC environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
- Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
- Experience with Jira / Jira Service Desk.
- Seeking an individual with around 5+ years of experience.
- Must-have skills: Jenkins, Groovy, Ansible, shell scripting, Python, Linux administration.
- Deep Terraform and AWS knowledge to automate and provision EC2, EBS, and SQL Server, along with cost optimization and CI/CD pipelines using Jenkins; serverless automation is a plus.
- Excellent writing and communication skills in English; enjoys writing crisp, understandable documentation.
- Comfortable programming in one or more scripting languages.
- Enjoys tinkering with tooling and researching easier ways to manage systems; strong awareness of build vs. buy.
About Us - Celebal Technologies is a premier software services company in the fields of Data Science, Big Data, and Enterprise Cloud. Celebal Technologies helps organizations discover their competitive advantage by employing intelligent data solutions built on cutting-edge technology. The core offerings center on "Data to Intelligence": leveraging data to extract intelligence and patterns, thereby facilitating smarter and quicker decision-making for clients. Understanding the core value of modern analytics across the enterprise, Celebal Technologies helps businesses improve business intelligence and become more data-driven in architecting solutions.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues & handle the continuous integration system.
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root cause solutions for failures and performance issues in the production environment.
• Deploy and manage Infrastructure for production applications
• Configure security best practices for application and infrastructure
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS & GCP. (Preferably Azure)
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• Experience with GitHub, Jira, Confluence, and continuous integration (CI) systems.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management methodologies and workflow tools such as Agile, Scrum/Kanban, and Jira.
• Experience with build technologies and cloud services (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS CodeDeploy).
• Strong communication skills and ability to explain protocol and processes with team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated, with a demonstrated ability to pick up new technologies with minimal supervision.
• Organized, flexible, and analytical ability to solve problems creatively.


Roles and Responsibilities:
• Gather and analyse cloud infrastructure requirements
• Automating system tasks and infrastructure using a scripting language (Shell/Python/Ruby preferred), with configuration management tools (Ansible/Puppet/Chef), service registry and discovery tools (Consul, Vault, etc.), infrastructure orchestration tools (Terraform, CloudFormation), and automated imaging tools (Packer)
• Support existing infrastructure, analyse problem areas and come up with solutions
• An eye for monitoring – the candidate should be able to look at complex infrastructure and figure out what to monitor and how.
• Work along with the Engineering team to help out with Infrastructure / Network automation needs.
• Deploy infrastructure as code and automate as much as possible
• Manage a team of DevOps engineers
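The monitoring bullet above usually means choosing percentile-based alert thresholds rather than averages, which outliers distort. An illustrative Python sketch of a nearest-rank percentile over latency samples:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest value with at least pct%
    of samples at or below it. Simple and adequate for alerting."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 14, 250, 13, 16, 12, 18, 900]  # two slow outliers
print("p50 =", percentile(latencies_ms, 50), "ms")  # p50 = 14 ms
print("p95 =", percentile(latencies_ms, 95), "ms")  # p95 = 900 ms
```

Note how the p95 surfaces the 900 ms outlier that a mean would dilute; this is why tail percentiles are the usual basis for latency alerts.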
Desired Profile:
• Understanding of provisioning of Bare Metal and Virtual Machines
• Working knowledge of configuration management tools like Ansible/Chef/Puppet and of hardware management interfaces like Redfish.
• Experience in scripting languages like Ruby/ Python/ Shell Scripting
• Working knowledge of IP networking, VPN's, DNS, load balancing, firewalling & IPS concepts
• Strong Linux/Unix administration skills.
• Self-starter who can implement with minimal guidance
• Hands-on experience setting up CI/CD from scratch in Jenkins
• Experience with Managing K8s infrastructure
Q2 is seeking a team-focused Lead Release Engineer with a passion for managing releases to ensure we release quality software developed using Agile Scrum methodology. Working within the Development team, the Release Manager will work in a fast-paced environment with Development, Test Engineering, IT, Product Management, Design, Implementations, Support and other internal teams to drive efficiencies, transparency, quality and predictability in our software delivery pipeline.
RESPONSIBILITIES:
- Provide leadership on cross-functional development focused software release process.
- Management of the product release cycle to new and existing clients including the build release process and any hotfix releases
- Support end-to-end process for production issue resolution including impact analysis of the issue, identifying the client impacts, tracking the fix through dev/testing and deploying the fix in various production branches.
- Work with engineering team to understand impacts of branches and code merges.
- Identify, communicate, and mitigate release delivery risks.
- Measure and monitor progress to ensure product features are delivered on time.
- Lead recurring release reporting/status meetings to include discussion around release scope, risks and challenges.
- Responsible for planning, monitoring, executing, and implementing the software release strategy.
- Establish completeness criteria for the release of successfully tested software components and their dependencies to gate the delivery of releases to Implementation groups
- Serve as a liaison between business units to guarantee smooth and timely delivery of software packages to our Implementations and Support teams
- Create and analyze operational trends and data used for decision making, root cause analysis and performance measurement.
- Build partnerships, work collaboratively, and communicate effectively to achieve shared objectives.
- Make Improvements to processes to improve the experience and delivery for internal and external customers.
- Responsible for ensuring that all security, availability, confidentiality and privacy policies and controls are adhered to.
EXPERIENCE AND KNOWLEDGE:
- Bachelor’s degree in Computer Science, or related field or equivalent experience.
- Minimum 4 years related experience in product release management role.
- Excellent understanding of software delivery lifecycle.
- Technical Background with experience in common Scrum and Agile practices preferred.
- Deep knowledge of software development processes, CI/CD pipelines and Agile Methodology
- Experience with tools like Jenkins, Bitbucket, Jira and Confluence.
- Familiarity with enterprise software deployment architecture and methodologies.
- Proven ability in building effective partnership with diverse groups in multiple locations/environments
- Ability to convey technical concepts to business-oriented teams.
- Capable of assessing and communicating risks and mitigations while managing ambiguity.
- Experience managing customer and internal expectations while understanding the organizational and customer impact.
- Strong organizational, process, leadership, and collaboration skills.
- Strong verbal, written, and interpersonal skills.

As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build & operate infrastructure to support the website, backend cluster, and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operation team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks and participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborate with other developers to understand & setup tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ years of industry experience.
- Scale existing back-end systems to handle ever-increasing traffic and new product requirements.
- Ruby On Rails or Python and Bash/Shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef.
- Understanding of the importance of smart metrics and alerting.
- Hands on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in working on Linux-based servers.
- Managing large-scale, production-grade infrastructure on AWS Cloud.
- Good knowledge of scripting languages like Ruby, Python, or Bash.
- Experience in creating deployment pipelines from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of Docker containers and their usage.
- Using infra/app monitoring tools like CloudWatch/New Relic/Sensu.
Good to have:
- Knowledge of Ruby on Rails based applications and its deployment methodologies.
- Experience working on Container Orchestration tools like Kubernetes/ECS/Mesos.
- Extra points for experience with front-end development, New Relic, GCP, Kafka, and Elasticsearch.
PRAXINFO Hiring DevOps Engineer.
Position : DevOps Engineer
Job Location : C.G.Road, Ahmedabad
EXP : 1-3 Years
Salary : 40K - 50K
Required skills:
⦿ Good understanding of cloud infrastructure (AWS, GCP etc)
⦿ Hands on with Docker, Kubernetes or ECS
⦿ Ideally strong Linux background (RHCSA , RHCE)
⦿ Good understanding of monitoring systems (Nagios etc) and logging solutions (Elasticsearch etc)
⦿ Microservice architectures
⦿ Experience with distributed systems and highly scalable systems
⦿ Demonstrated history in automating operations processes via services and tools ( Puppet, Ansible etc)
⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.
If anyone is interested, share your resume at hiring at praxinfo dot com!
- Strong Understanding of Linux administration
- Good understanding of using Python or Shell scripting (Automation mindset is key in this role)
- Hands on experience with Implementation of CI/CD Processes
- Experience working with one of these cloud platforms (AWS, Azure or Google Cloud)
- Experience working with configuration management tools such as Ansible, Chef
- Experience in Source Control Management including SVN, Bitbucket and GitHub
- Experience with setup & management of monitoring tools like Nagios, Sensu & Prometheus
- Troubleshoot and triage development and production issues
- Understanding of micro-services is a plus
Roles & Responsibilities
- Implementation and troubleshooting of Linux technologies related to OS, virtualization, server and storage, backup, scripting/automation, and performance fine-tuning
- LAMP stack skills
- Monitoring tools deployment / management (Nagios, New Relic, Zabbix, etc)
- Infra provisioning using Infra as code mindset
- CI/CD automation

