
Key Responsibilities
DevOps Strategy & Leadership
- Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
- Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
- Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.
CI/CD & Infrastructure Automation
- Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
- Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
- Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.
Cloud & On‑Prem Infrastructure Management
- Design and manage hybrid infrastructures across AWS, GCP, and on-premises data centers, ensuring high availability and fault tolerance.
- Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
- Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.
Performance Monitoring & Optimization
- Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
- Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
- Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
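To make the monitoring bullets above concrete, alerting of this kind often reduces to a small percentile check over a window of latency samples. A minimal, hypothetical Python sketch (the threshold, function names, and sample values are illustrative assumptions, not part of this role description):

```python
# Hypothetical latency-alerting helper: given one monitoring window of
# request latencies (in microseconds), compute a percentile and flag a breach.

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    # Nearest-rank method: ceil(pct/100 * N), converted to a 1-based rank.
    rank = max(1, -(-pct * len(ordered) // 100))  # ceiling division
    return ordered[int(rank) - 1]

def check_latency(samples, p99_limit_us=500):
    """Return (p99, breached) for one monitoring window."""
    p99 = percentile(samples, 99)
    return p99, p99 > p99_limit_us

# One window with two slow outliers; the p99 limit is an assumed SLO.
window = [120, 95, 110, 480, 130, 105, 90, 700, 115, 100]
p99, breached = check_latency(window, p99_limit_us=500)
```

In production this logic usually lives in Prometheus recording and alerting rules rather than application code; the sketch only shows the arithmetic behind such a rule.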
Security & Compliance
- Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
- Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
- Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.
Required Skills & Qualifications
- 10+ years of DevOps experience, with 5+ years in a leadership capacity.
- Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
- Strong command of AWS, GCP, and hybrid cloud infrastructures.
- Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
- Advanced proficiency in Terraform, Helm, and overall IaC workflows.
- Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
- Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
- Excellent scripting skills in Python, Bash, or Go for automation and tooling.
- Deep understanding of security principles, encryption, IAM, and compliance frameworks.
Good to Have
- Experience with ultra-low-latency or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
- Familiarity with Redis, Nginx, or other high‑throughput systems.
- Exposure to micro‑second‑level performance tuning or network acceleration technologies.
Why Join Us?
- Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
- A culture where innovation, ownership, and bold thinking are valued.
- Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
- Build systems that influence markets and redefine the fintech landscape.
This isn’t just a role—it’s a challenge, a platform, and a proving ground.
Ready to step up? Apply now.

About Tradelab Technologies
TradeLab is a leading fintech technology provider, delivering cutting-edge solutions to brokers, banks, and fintech platforms. Our portfolio includes high-performance Order & Risk Management Systems (ORMS), seamless MetaTrader integrations, AI-driven customer engagement platforms such as PULSE LLaVA, and compliance-grade risk management solutions. With a proven track record of successful deployments at top-tier brokerages and financial institutions, TradeLab combines scalability, regulatory alignment, and innovation to redefine digital broking and empower clients in the capital markets ecosystem.
Location: Bangalore/Mumbai
Job Title: DevOps Engineer
Location: Mumbai
Experience: 2–4 Years
Department: Technology
About InCred
InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.
Role Overview
As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.
Key Responsibilities
- Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
- CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
- Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
- Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
- Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
- Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
- Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
- Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.
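Several of the responsibilities above (CI/CD automation, proactive monitoring) in practice come down to small pieces of glue code. A hypothetical sketch of a post-deployment smoke check with exponential backoff, in Python; the function names and limits are illustrative, not InCred's actual tooling:

```python
# Hypothetical post-deployment smoke check: retry a health probe with
# exponential backoff before declaring a rollout failed.

import time

def smoke_check(probe, attempts=5, base_delay=1.0, sleep=None):
    """Call `probe()` up to `attempts` times, backing off 1s, 2s, 4s, ...
    `probe` returns True on a healthy response. `sleep` is injectable so
    the check can be unit-tested without real waiting."""
    sleep = sleep or time.sleep
    delay = base_delay
    for attempt in range(1, attempts + 1):
        if probe():
            return attempt          # healthy on this attempt
        if attempt < attempts:
            sleep(delay)
            delay *= 2              # exponential backoff
    return 0                        # never became healthy

# Example: a simulated service that only answers on the third probe.
responses = iter([False, False, True])
result = smoke_check(lambda: next(responses), sleep=lambda s: None)
```

In a real pipeline the probe would hit a health endpoint, and a `0` return would trigger a rollback step; injecting `sleep` keeps the retry logic testable.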
Required Skills
- 2–4 years of hands-on experience in DevOps roles.
- Strong knowledge of Linux administration and shell scripting (Bash/Python).
- Experience with AWS services and cloud architecture.
- Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
- Familiarity with Docker, Kubernetes, and container orchestration.
- Knowledge of Terraform or similar IaC tools.
- Understanding of networking, security, and performance tuning.
- Exposure to monitoring tools (Prometheus, Grafana) and log management.
Preferred Qualifications
- Experience in financial services or fintech environments.
- Knowledge of microservices architecture and enterprise-grade SaaS setups.
- Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).
Why Join InCred?
- Culture: High-performance, ownership-driven, and innovation-focused environment.
- Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
- Rewards: Competitive compensation, ESOPs, and performance-based incentives.
- Impact: Be part of a mission-driven organization transforming India’s credit landscape.
We are looking for a passionate DevOps Engineer to support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and be able to clearly articulate how they work. Knowledge of shell scripting and security is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with applications such as Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.
Responsibilities and Accountabilities:
- As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infrastructure components of VakilSearch’s product, which are hosted in cloud services and an on-prem facility
- Design and build tools and frameworks that support deploying and managing our platform; explore new tools, technologies, and processes to improve speed, efficiency, and scalability
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore across different environments
- Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
- Resolve incidents as escalated from monitoring tools and the Business Development Team
- Implement and follow security guidelines, both policy and technology, to protect our data
- Identify the root cause of issues, develop long-term solutions to fix recurring issues, and document them
- Willingness to perform production operations activities, including at night when required
- Ability to automate recurring tasks [Scripts] to increase velocity and quality
- Ability to manage and deliver multiple project phases at the same time
I Qualification(s):
- Experience in working with Linux Server, DevOps tools, and Orchestration tools
- Linux, AWS, GCP, Azure, CompTIA, and other certifications are a value-add
II Experience Required in DevOps Aspects:
- Length of Experience: Minimum 1-4 years of experience
- Nature of Experience:
- Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
- Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
- Hands-on experience in any of the configuration management IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet is a value add ]
- Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
- Experience with containerization and orchestration tools like Docker and Kubernetes [ Docker Swarm is a value add ]
- Good scripting skills in at least one interpreted language - Shell/Bash scripting or Ruby/Python/Perl
- Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
- Good at version control and source code management systems like Git and GitHub
- Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
- Experience in Web Server Nginx, and Apache
- Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
- Knowledge in Puma, Unicorn, Gunicorn & Yarn
- Hands-on VMWare ESXi/Xencenter deployments is a value add
- Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
- Deploying, Configuring, and Maintaining Linux server systems ON premises and off-premises
- Code Quality like SonarQube is a value-add
- Test Automation like Selenium, JMeter, and JUnit is a value-add
- Experience in Heroku and OpenStack is a value-add
- Experience in identifying inbound and outbound threats and resolving them
- Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages
- Documenting the Security fix for future use
- Establish cross-team collaboration with security built into the software development lifecycle
- Forensics and Root Cause Analysis skills are mandatory
- Weekly Sanity Checks of the on-prem and off-prem environment
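Many of the recurring tasks listed above (monitoring, incident triage, sanity checks) come down to parsing logs. A small, hypothetical Python sketch that counts ERROR lines per service from syslog-style records; the log format, names, and sample lines are assumptions for illustration only:

```python
# Hypothetical log triage helper: count ERROR-level records per service
# from syslog-style lines of the form "<timestamp> <service>: <LEVEL> <msg>".

import re
from collections import Counter

LINE = re.compile(r"^\S+ (?P<svc>[\w-]+): (?P<level>[A-Z]+) ")

def error_counts(lines):
    """Return a dict mapping service name -> number of ERROR records."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("svc")] += 1
    return dict(counts)

# Illustrative sample; real input would be streamed from a log file.
logs = [
    "2024-05-01T10:00:00 nginx: ERROR upstream timed out",
    "2024-05-01T10:00:01 nginx: INFO request served",
    "2024-05-01T10:00:02 sidekiq: ERROR job failed",
    "2024-05-01T10:00:03 nginx: ERROR upstream timed out",
]
```

A script like this is typically the first step toward the "automate recurring tasks" bullet: once the counts exist, thresholding and alerting are a few lines more.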
III Skill Set & Personality Traits required:
- An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
- Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers
IV Age Group: 21 – 36 Years
V Cost to the Company: As per industry standards
Role Overview:
We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
- Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
- Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
- Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
- Implement security best practices in pipelines, infrastructure, and cloud environments.
- Maintain version control and manage release cycles.
- Troubleshoot and resolve production issues efficiently.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, IT, or related field.
- Proven experience in DevOps, system administration, or cloud engineering.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
- Hands-on experience with containerization (Docker, Kubernetes).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Scripting skills (Python, Bash, or PowerShell).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Familiarity with monitoring and logging tools.
- Strong problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of networking, load balancing, and firewalls.
- Exposure to Agile/Scrum methodologies.
What We Offer:
- Competitive salary
- Flexible working hours and remote options.
- Learning and development opportunities.
- Collaborative and inclusive work environment.
Hiring for the below position with one of our premium clients
Role: Senior DevOps Engineer
Experience: 7+ years
Location: Chennai
Key skills: DevOps, Cloud, Python scripting
Description:
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
7-9 years overall and at least 3-4 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
If interested kindly apply!
Sr. DevOps Engineer (5 to 8 yrs. exp.)
Location: Ahmedabad
- Strong Experience in Infrastructure provisioning in cloud using Terraform & AWS CloudFormation Templates.
- Strong experience in serverless and containerization technologies such as Kubernetes, Docker, etc.
- Strong Experience in Jenkins & AWS Native CI/CD implementation using code
- Strong experience in cloud operational automation using Python, shell script, AWS CLI, AWS Systems Manager, AWS Lambda, etc.
- Day to Day AWS Cloud administration tasks
- Strong Experience in Configuration management using Ansible and PowerShell.
- Strong experience in Linux and at least one scripting language is required.
- Knowledge of monitoring tools will be an added advantage.
- Understanding of DevOps practices which involves Continuous Integration, Delivery and Deployment.
- Hands on with application deployment process
Key Skills: AWS, Terraform, Serverless, Jenkins, DevOps, CI/CD, Python, CLI, Linux, Git, Kubernetes
Role: Software Developer
Industry Type: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Employment Type: Full Time, Permanent
Education: Any computer graduate.
Salary: Best in Industry.
Key Responsibilities:
- Work with the development team to plan, execute and monitor deployments
- Capacity planning for product deployments
- Adopt best practices for deployment and monitoring systems
- Ensure the SLAs for performance and uptime are met
- Constantly monitor systems, suggest changes to improve performance and decrease costs.
- Ensure the highest standards of security
Key Competencies (Functional):
- Proficiency in coding in at least one scripting language - Bash, Python, etc.
- Has personally managed a fleet of servers (> 15)
- Understands the different environments: production, staging, and deployment
- Has worked in microservice / service-oriented architecture systems
- Has worked with automated deployment systems – Ansible / Chef / Puppet
- Can write MySQL queries
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and is incredibly difficult!
We are building this company with a fully remote approach, with our main teams for time zone management in the US and in India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events

- Provides free and subscription-based website and email services hosted and operated at data centres in Mumbai and Hyderabad.
- Serves a global audience and customers through sophisticated content delivery networks.
- Operates a service infrastructure using the latest technologies for web services and a very large storage infrastructure.
- Provides virtualized infrastructure, allowing seamless migration and the addition of services for scalability.
- A pioneer and one of the earliest adopters of public cloud and NoSQL big data stores, for more than a decade.
- Provides innovative internet services, working with multiple technologies such as PHP, Java, NodeJS, Python, and C++ to scale services as needed.
- Has Internet infrastructure peering arrangements with all the major and minor ISPs and telecom service providers.
- Has mail traffic exchange agreements with major Internet services.
Job Details :
- This job position provides competitive professional opportunity both to experienced and aspiring engineers. The company's technology and operations groups are managed by senior professionals with deep subject matter expertise.
- The company believes having an open work environment offering mentoring and learning opportunities with an informal and flexible work culture, which allows professionals to actively participate and contribute to the success of our services and business.
- You will be part of a team that keeps the business running for cloud products and services used 24x7 by the company's consumer and enterprise customers around the world. You will be asked to help operate, maintain, and provide escalation support for the company's cloud infrastructure, which powers all of its cloud offerings.
Job Role :
- As a senior engineer, your role grows as you gain experience in our operations. We facilitate a hands-on learning experience after an induction program, to get you into the role as quickly as possible.
- The systems engineer role also requires candidates to research and recommend innovative and automated approaches for system administration tasks.
- The work culture allows a seamless integration with different product engineering teams. The teams work together and share responsibility to triage in complex operational situations. The candidate is expected to stay updated on best practices and help evolve processes both for resilience of services and compliance.
- You will be required to provide support for both production and non-production environments to ensure system updates and expected service levels. You will specifically handle 24/7 L2 and L3 oversight for incident response, and must have an excellent understanding of the end-to-end support process, from the client through the different support escalation levels.
- The role also requires a discipline to create, update and maintain process documents, based on operation incidents, technologies and tools used in the processes to resolve issues.
QUALIFICATION AND EXPERIENCE :
- A graduate degree or senior diploma in engineering or technology with some or all of the following:
- Knowledge and work experience with KVM, AWS (Glacier, S3, EC2), RabbitMQ, Fluentd, Syslog, Nginx is preferred
- Installation and tuning of Web Servers, PHP, Java servlets, memory-based databases for scalability and performance
- Knowledge of email related protocols such as SMTP, POP3, IMAP along with experience in maintenance and administration of MTAs such as postfix, qmail, etc will be an added advantage
- Must have knowledge of monitoring tools, trend analysis, networking technologies, security tools, and troubleshooting.
- Knowledge of analyzing and mitigating security related issues and threats is certainly desirable.
- Knowledge of agile development/SDLC processes and hands-on participation in planning sprints and managing daily scrum is desirable.
- Preferably, programming experience in Shell, Python, Perl or C.










