50+ Linux/Unix Jobs in India
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in computer science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
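The log-analysis skill listed above can be illustrated with a minimal sketch. The log format and component names here are hypothetical, purely for illustration:

```python
import re
from collections import Counter

def summarize_errors(lines):
    """Count ERROR entries per component in application log lines.

    Assumes a hypothetical format: '2024-01-05 09:15:01 ERROR OMS timeout'.
    """
    pattern = re.compile(r"^\S+ \S+ ERROR (\S+)")
    counts = Counter()
    for line in lines:
        m = pattern.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

log = [
    "2024-01-05 09:15:01 ERROR OMS timeout",
    "2024-01-05 09:15:02 INFO RMS heartbeat ok",
    "2024-01-05 09:15:03 ERROR OMS reject",
    "2024-01-05 09:15:04 ERROR FIX session drop",
]
print(summarize_errors(log))  # Counter({'OMS': 2, 'FIX': 1})
```

In a support role, a summary like this is typically the first step before escalating to the component team that owns the failing subsystem.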
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools.
Why Join Us?
• Be part of a team supporting mission-critical systems in real-time.
• Work in a high-energy, tech-driven environment.
• Opportunities to grow into domain/tech leadership roles.
• Competitive salary and benefits, health coverage, and employee wellness programs.
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based company or a scalable app-based startup, with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools such as Ansible, Terraform, and Chef/Puppet; familiarity with Prometheus and Grafana is an added advantage.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with DATABASE MIGRATIONS and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience with a cloud platform (AWS).
9. Candidate should have a minimum of 1.5 years' tenure per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this role is the right fit for you.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Identifying technical problems and developing software updates and ‘fixes’.
- Working with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Strong communication and collaboration skills to break down silos.
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understanding of the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools such as Ansible, Terraform, and Chef/Puppet; familiarity with Prometheus and Grafana is an added advantage.
- Problem-solving attitude, and ability to write scripts using any scripting language.
- Understanding of programming languages such as Go, Python, and Java.
- Basic understanding of databases and middleware such as Mongo, Redis, Cassandra, Elasticsearch, and Kafka.
- Should be able to take ownership of tasks and must be responsible.
- Good communication skills.
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing, and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability
5. Implement best practices for version control, model reproducibility, and governance
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies, and trends
10. Provide support and guidance to other team members on MLOps practices
Required skills and experience:
• 3-10 years of experience in MLOps, DevOps, or a related field
• Bachelor’s degree in Computer Science, Data Science, or a related field
• Strong understanding of machine learning principles and model lifecycle management
• Experience in Jenkins pipeline development
• Experience in automation scripting
Senior C++/Qt Backend Engineer
(High-Performance Systems)
Location: Noida (On-site)
Introduction: Who We Are
We are a lean, product-based startup building the next generation of industrial robotics. Our products are deployed in critical, high-stakes environments, including Railways, Oil & Gas, Chemicals & Fertilizers, and Offshore operations.
We are not just writing code; we are building rugged, intelligent machines that operate in the real world.
1. The Mission (Pure Backend Focus)
You will architect the high-performance C++ Backend of our robotics software.
- No UI Work: You will NOT be designing UI pixels or writing QML front-end code.
- The Engine: Your mission is to build the “invisible engine” that processes 50 Mbps of raw scientific data and feeds it efficiently to the UI layer.
- Ownership: You own the threads, the data structures, and the logic.
2. Critical Outcomes (The First 4 Months)
- Architect the Data Ingestion Layer:
- Design a C++ backend capable of ingesting 50 Mbps of live sensor data (from embedded hardware) without dropping packets or consuming excessive CPU.
- Decouple Backend from UI:
- Implement Ring Buffers and Lock-Free Queues to separate high-speed data acquisition threads from the main Qt Event Loop, ensuring the backend never freezes the UI.
- Crash-Proof Concurrency:
- Refactor the threading model to eliminate Race Conditions and Deadlocks using proper synchronization (Mutexes/Semaphores) or lock-free designs.
- Efficient IPC Implementation:
- Establish robust Inter-Process Communication (Shared Memory / Sockets) to allow the C++ backend to exchange data with other Linux processes instantly.
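The decoupling idea described above, a bounded buffer between the high-speed acquisition thread and the event loop, can be sketched as follows. The role itself is C++/Qt; Python is used here only for brevity, and the `RingBuffer` class is a hypothetical illustration of the pattern, not the product's actual code:

```python
import threading
from collections import deque

class RingBuffer:
    """Bounded buffer that overwrites the oldest sample when full,
    so a slow consumer (the UI side) never blocks the acquisition thread."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)
        self._lock = threading.Lock()

    def push(self, item):
        # Called from the acquisition (producer) thread.
        with self._lock:
            self._buf.append(item)  # drops the oldest item at capacity

    def drain(self):
        # Called from the consumer thread; takes everything buffered so far.
        with self._lock:
            items = list(self._buf)
            self._buf.clear()
        return items

rb = RingBuffer(capacity=3)
for sample in range(5):
    rb.push(sample)      # producer side
print(rb.drain())        # consumer side -> [2, 3, 4]
```

The key property is that `push` never blocks on a full buffer: under sustained overload, old samples are discarded rather than stalling acquisition, which is the trade-off the posting's "never freezes the UI" requirement implies.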
3. Strategic Outcomes (Months 5 Onward)
As the product matures, your focus will shift from “Building the Engine” to “Hardening and Scaling the Ecosystem.”
- Robust OTA & Redundancy:
- Implement Linux A/B Partitioning strategies. You will design the fallback mechanism where the system uses atomic updates to revert to the last known good configuration in case of an update failure, ensuring high availability in remote offshore locations.
- Containerized Deployment:
- Move from manual builds to automated deployment. You will containerize the application (Docker / Podman) and integrate it with Jenkins / GitLab CI to enable seamless remote deployment to the robot fleet.
- Remote Diagnostics Engine:
- Build the internal logic to capture, compress, and transmit critical system logs and core dumps securely to the cloud without saturating the robot’s bandwidth.
- Fleet Monitoring Infrastructure:
- Distinct from simple logging, you will architect the heartbeat and telemetry protocols that allow our central command to monitor the health of robots deployed in railways and chemical plants in real time.
4. Competencies (Must-Haves)
- Qt Core (Backend Only):
- Expert in QObject, QThread, QEventLoop, and Signal/Slot mechanisms. You understand how to push data to QML, but you don’t style it.
- High-Performance C++:
- You handle data at the byte level, preferring Circular Buffers (Ring Buffers) over standard vectors for streams.
- Concurrency Mastery:
- You know when to use Lock-Free programming to avoid thread contention and can manage interactions between Data Acquisition and Processing threads without bottlenecks.
- Design Patterns:
- Competence in Producer-Consumer (for streams), Singleton (hardware managers), and Factory patterns.
- Linux System Programming:
- Comfortable with IPC (Shared Memory, Unix Domain Sockets) and optimizing process priorities.
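The shared-memory IPC competency above can be sketched with the standard library. The segment name and the 16-byte layout are illustrative assumptions; a real deployment would define a fixed binary layout and synchronization between processes:

```python
from multiprocessing import shared_memory

# "Writer" side: publish a small telemetry frame into named shared memory.
writer = shared_memory.SharedMemory(create=True, size=16, name="telemetry_demo")
writer.buf[:4] = (1234).to_bytes(4, "little")   # e.g. a sample counter

# "Reader" side: a separate process would attach by name instead of creating.
reader = shared_memory.SharedMemory(name="telemetry_demo")
value = int.from_bytes(reader.buf[:4], "little")
print(value)  # 1234

reader.close()
writer.close()
writer.unlink()  # free the segment when both sides are done
```

On Linux this maps to a `/dev/shm` segment, the same mechanism a C++ backend would reach through POSIX `shm_open`/`mmap`.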
5. The “Squad” (Your Team)
- Embedded Engineers:
- They push the raw 50 Mbps stream to the OS; you write the drivers to catch it.
- UI / Frontend Developers:
- They handle QML / UX; you provide the data APIs they need.
- Robotics (ROS) Engineers:
- You ensure their heavy algorithms don’t starve your data acquisition threads.
- Testers:
- You ensure your code stands up to their stress testing.
6. Why This Role Defines Your Career
- Deep Backend Engineering:
- Escape the “button styling” trap. This is 100% logic, memory management, and architecture.
- Real Engineering Problems:
- Solve race conditions, memory leaks, and high-velocity data streams.
- Architectural Autonomy:
- You decide how the data moves and choose the patterns. You own the “Engine Room.”
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Job Summary
We are looking for a Marketing Data Engineering Specialist who can manage our real-estate lead delivery pipelines, integrate APIs, automate data workflows, and support performance marketing with accurate insights. The ideal candidate understands marketing funnels and has strong skills in API integrations, data analysis, automation, and server deployments.
Key Responsibilities
Manage inbound/outbound lead flows through APIs, webhooks, and sheet-based integrations.
Clean, validate, and automate datasets using Python, Excel, and ETL workflows.
Analyse lead feedback (RNR, NT, QL, SV, Booking) and generate actionable insights.
Build and maintain automated reporting dashboards.
Deploy Python scripts/notebooks on Linux servers and monitor cron jobs/logs.
Work closely with marketing, client servicing, and data teams to improve lead quality and campaign performance.
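The "clean, validate, and automate datasets" responsibility can be sketched with a small standard-library example. The column names and validation rules (10-digit phone numbers, the status codes from the posting) are illustrative assumptions:

```python
import csv
import io

RAW = """name,phone,status
 Asha ,9876543210,SV
Ravi,98765,RNR
Meera,9876501234,booking
"""

def clean_leads(raw_csv):
    """Trim whitespace, drop rows with malformed phone numbers,
    and normalise the feedback status to upper case."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        phone = row["phone"].strip()
        if len(phone) != 10 or not phone.isdigit():
            continue  # reject malformed phone numbers
        rows.append({
            "name": row["name"].strip(),
            "phone": phone,
            "status": row["status"].strip().upper(),
        })
    return rows

leads = clean_leads(RAW)
print(leads)
```

In practice the same logic would typically run over Pandas DataFrames inside a scheduled ETL job, but the validation rules stay the same.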
Required Skills
Python (Pandas, API requests), Advanced Excel, SQL
REST APIs, JSON, authentication handling
Linux server deployment (cron, logs)
Data visualization tools (Excel, Google Looker Studio preferred)
Strong understanding of performance marketing metrics and funnels
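The "REST APIs, JSON, authentication handling" skill above can be sketched by building an authenticated JSON request with the standard library. The endpoint URL, payload fields, and bearer token are hypothetical, and nothing is actually sent here:

```python
import json
import urllib.request

payload = {"lead_id": 42, "status": "SV"}
req = urllib.request.Request(
    "https://api.example.com/leads",          # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",    # placeholder credential
    },
    method="POST",
)
print(req.get_method(), req.get_header("Content-type"))
# POST application/json
```

Sending the request would be `urllib.request.urlopen(req)`; the same pattern applies with the `requests` library, which most lead-delivery scripts use in practice.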
Qualifications
Bachelor’s degree in Engineering/CS/Maths/Statistics/Marketing Analytics or related field.
Minimum 3 years of experience in marketing analytics, data engineering, or marketing operations.
Preferred Traits
Detail-oriented, analytical, strong problem-solver
Ability to work in fast-paced environments
Good communication and documentation skills
Description
We are seeking a skilled and detail-oriented Member of Technical Staff focusing on Network Infrastructure, Linux Administration and Automation. The role involves managing and maintaining Linux-based systems and infrastructure, automating repetitive tasks, and ensuring smooth operation.
Requirements
- In-depth experience with Linux systems (configuration, troubleshooting, networking, and administration)
- Network infrastructure management knowledge. CCNA/CCNP or an equivalent certification is a plus
- Scripting skills in at least one language (e.g., Bash, Python, Go).
- Knowledge of version control systems like Git and experience with branching, merging, and tagging workflows
- Experience with virtualization technologies such as Proxmox or VMWare, including the design, implementation, and management of virtualized infrastructures. Understanding of virtual machine provisioning, resource management, and performance optimization in virtual environments.
- Experience with containerization technologies like Docker
- Familiarity with monitoring and logging tools.
- Experience with end point security.
Responsibilities
- Network Infrastructure Management: Configure, manage, and troubleshoot routers, switches, firewalls, and wireless networks; maintain and optimize network performance to ensure reliability and security.
- Linux Administration: Manage and maintain Linux-based systems, ensuring high availability and performance.
- Infrastructure Management: Managing servers, networks, storage, and other infrastructure components, capacity planning, and disaster recovery.
- Automation: Scripting (Bash, Python, Golang, etc.), configuration management (Ansible, Puppet, Chef).
- Virtualization: Design, implement, and manage virtualized environments, ensuring optimal performance and resource efficiency.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young, innovative company, it stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), regional ISPs, and city ISPs, with expanding operations across Europe, North America, and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us, so embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.
Hiring DevOps Engineers (Freelance)
We’re hiring for our client: Biz-Tech Analytics
Role: DevOps Engineer (Freelance)
Experience: 4-7+ years
Project: Terminus Project
Location: Remote
Engagement Type: Freelance | Project-based
About the Role:
Biz-Tech Analytics is looking for experienced DevOps Engineers to contribute to the Terminus Project, a hands-on initiative involving system-level problem solving, automation, and containerised environments.
This role is ideal for engineers who enjoy working close to the system layer, debugging complex issues, and building reliable automation in isolated environments.
Key Responsibilities:
• Work on Linux-based systems, handling process management, file systems, and system utilities
• Write clean, testable Python code for automation and verification
• Build, configure, and manage Docker-based environments for testing and deployment
• Troubleshoot and debug complex system and software issues
• Collaborate using Git and GitHub workflows, including pull requests and branching
• Execute tasks independently and iterate based on structured feedback
Required Skills & Qualifications:
• Expert-level proficiency with Linux CLI, including Bash scripting
• Strong Python programming skills for automation and tooling
• Hands-on experience with Docker and containerized environments
• Excellent problem-solving and debugging skills
• Proficiency with Git and standard GitHub workflows
Preferred Qualifications:
• Professional experience in DevOps or Site Reliability Engineering (SRE)
• Exposure to cloud platforms such as AWS, GCP, or Azure
• Familiarity with machine learning frameworks like TensorFlow or PyTorch
• Prior experience contributing to open-source projects
Engagement Details
• Fully remote freelance engagement
• Flexible workload, with scope to take on additional tasks
• Opportunity to work on real-world systems supporting advanced AI and infrastructure projects
Apply via Google form: https://forms.gle/SDgdn7meiicTNhvB8
About Biz-Tech Analytics:
Biz-Tech Analytics partners with global enterprises, AI labs, and industrial businesses to help them build and scale frontier AI systems. From data creation to deployment, the team delivers specialised services including human-in-the-loop annotation, reinforcement learning from human feedback (RLHF), and custom dataset creation.
With a network of 500+ vetted developers, STEM professionals, linguists, and domain experts, Biz-Tech Analytics supports leading global platforms by enhancing complex AI models and providing high-precision feedback at scale.
Their work sits at the intersection of advanced research, engineering rigor, and real-world AI deployment, making them a strong partner for cutting-edge AI initiatives.
As an Engineering Manager, you'll lead efforts to strengthen and optimize our state-of-the-art systems, ensuring high performance, scalability, and efficiency across our suite of trading solutions.
The core responsibilities for the job include the following:
Technical Expertise:
- C++ coding and debugging to strengthen and optimize systems.
- Design and architecture (HLD/LLD) to ensure scalable and robust solutions.
- Implementing and enhancing DevOps, Agile, and CI/CD pipelines to improve development workflows.
- Managing escalations and ensuring high-quality customer outcomes.
Architecture and Design:
- Define and refine the architectural vision and technical roadmap for enterprise software solutions.
- Design scalable, maintainable, and secure systems in line with business goals.
- Collaborate with stakeholders to translate requirements into technical solutions.
- Driving engineering initiatives to foster innovation, efficiency, and excellence.
Project Management:
- Oversee project timelines, deliverables, and quality assurance processes.
- Coordinate cross-functional teams to ensure seamless integration of systems.
- Identify risks and proactively implement mitigation strategies.
Technical Leadership:
- Lead and mentor a team of engineers, fostering a collaborative and high-performance culture.
- Provide technical direction and guidance on complex software engineering challenges.
- Drive code quality, best practices, and standards across the engineering team.
Requirements:
- 10-15 years in the tech industry, with 2-4 years in technical leadership or managerial roles.
- Technical Expertise: Expertise in C++ development, enterprise architecture, and scalable system design, and proficiency in performance optimization, scalability, software architecture, and networking principles.
- Extensive experience managing the full development lifecycle of large-scale software products, from concept to deployment.
- Strong knowledge of STL containers, multi-threading concepts, and algorithms.
- Solid understanding of memory management and efficient resource utilization.
- Microservices Architecture Expertise: Experience in designing and implementing scalable, reliable microservices.
- Strong Communication and Decision-making skills: Ability to clearly articulate trade-offs, make informed decisions, and ensure alignment across stakeholders.
- Commitment to Creating and fostering Engineering Excellence: Deep understanding of best practices, including code quality, testability, security, and release management, and passion for fostering a strong engineering culture and continuously improving developer workflows and tools.
- Self-Driven and Motivated: Ability to operate independently while driving impactful results.
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary: INR 12,00,000 - 14,00,000
Job Summary
Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python
- Implement IaC (Terraform/CloudFormation)
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
Backend Developer (Django)
About the Role:
We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.
Key Responsibilities:
- Develop and maintain Python-based web applications using Django and Django Rest Framework.
- Build and integrate RESTful APIs.
- Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
- Contribute to improving development workflows through automation.
- Assist in deploying applications using cloud platforms like Heroku or AWS.
- Write clean, maintainable, and efficient code.
Requirements:
Backend:
- Strong understanding of Django and Django Rest Framework (DRF).
- Experience with task queues like Celery.
Frontend (Basic Understanding):
- Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.
Hosting & Deployment:
- Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.
Linux/Server Knowledge:
- Basic to intermediate understanding of Linux commands and server environments.
- Ability to work with terminal, virtual environments, SSH, and basic server configurations.
Python Knowledge:
- Good grasp of OOP concepts.
- Familiarity with Pandas for data manipulation is a plus.
Soft & Team Skills:
- Strong collaboration and team management abilities.
- Ability to work in a team-driven environment and coordinate tasks smoothly.
- Problem-solving mindset and attention to detail.
- Good communication skills and eagerness to learn
What We Offer:
- A collaborative, friendly, and growth-focused work environment.
- Opportunity to work on real-time projects using modern technologies.
- Guidance and mentorship to help you advance in your career.
- Flexible and supportive work culture.
- Opportunities for continuous learning and skill development.
Location : Bhayander (Onsite)
Immediate to 30-day joiner and Mumbai-based candidate preferred.
Dear Candidate
Candidate must have:
- Minimum 3-5 years of experience working as a NOC Engineer / Senior NOC Engineer in the telecom/product industry (preferably telecom monitoring).
- BE in CS, EE, or Telecommunications from a recognized university.
- Knowledge of NOC processes.
- Technology exposure to telecom (5G, 4G, IMS) with a solid understanding of telecom performance KPIs and/or the Radio Access Network; knowledge of call flows is an advantage.
- Experience with Linux OS and SQL – mandatory.
- Residence in Delhi – mandatory.
- Ready to work in a 24×7 environment.
- Ability to monitor alarms based on our environment.
- Capability to identify and resolve issues occurring in the RADCOM environment.
- Any relevant technical certification will be an added advantage.
Responsibilities:
- Based in RADCOM India offices, Delhi.
- Responsible for all NOC Monitoring and technical support -T1/T2 aspects required by the process for RADCOM’s solutions.
- Ready to participate under Customer Planned activities / execution and monitoring.
Artificial Intelligence Research Intern
We are looking for a passionate and skilled AI Intern to join our dynamic team for a 6-month full-time internship. This is an excellent opportunity to work on cutting-edge technologies in Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing (NLP), contributing to real-world projects that create a tangible impact.
Key Responsibilities:
• Research, design, develop, and implement AI and Deep Learning algorithms.
• Work on NLP systems and models for tasks such as text classification, sentiment analysis, and data extraction.
• Evaluate and optimize machine learning and deep learning models.
• Collect, process, and analyze large-scale datasets.
• Use advanced techniques for text representation and classification.
• Write clean, efficient, and testable code for production-ready applications.
• Perform web scraping and data extraction using Python (requests, BeautifulSoup, Selenium, APIs, etc.).
• Collaborate with cross-functional teams and clearly communicate technical concepts to both technical and non-technical audiences.
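The web-scraping bullet above typically pairs a fetching library such as `requests` with a parser for extraction. As a minimal illustration of the extraction half using only the standard library (the HTML snippet and link structure are invented for the example), a sketch with `html.parser`:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (text, href) pairs from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the anchor currently being read, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# Invented HTML fragment standing in for a fetched page.
html = '<ul><li><a href="/jobs/1">AI Intern</a></li><li><a href="/jobs/2">NOC Engineer</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # [('AI Intern', '/jobs/1'), ('NOC Engineer', '/jobs/2')]
```

In practice BeautifulSoup or Selenium (as named in the bullet) would replace the hand-rolled parser; the stdlib version is shown only so the sketch is self-contained.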
Required Skills and Experience:
• Theoretical and practical knowledge of AI, ML, and DL concepts.
• Good understanding of Python and libraries such as TensorFlow, PyTorch, Keras, Scikit-learn, NumPy, Pandas, SciPy, and Matplotlib, plus NLP tools like NLTK and spaCy.
• Strong understanding of Neural Network Architectures (CNNs, RNNs, LSTMs).
• Familiarity with data structures, data modeling, and software architecture.
• Understanding of text representation techniques (n-grams, BoW, TF-IDF, etc.).
• Comfortable working in Linux/UNIX environments.
• Basic knowledge of HTML, JavaScript, HTTP, and Networking.
• Strong communication skills and a collaborative mindset.
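The text-representation techniques listed above (BoW, TF-IDF) are small enough to sketch directly. A minimal stdlib version of the classic TF-IDF weighting tf(t, d) * log(N / df(t)), over a toy two-document corpus invented for illustration:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Classic TF-IDF: tf(t, d) * log(N / df(t)), for pre-tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))   # document frequency: how many docs contain each term
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

docs = [["buy", "stock", "now"], ["sell", "stock", "later"]]
weights = tf_idf(docs)
print(round(weights[0]["buy"], 3))   # 0.231 -- term unique to doc 0 gets a positive weight
print(weights[0]["stock"])           # 0.0 -- "stock" appears in every document
```

Production code would use `sklearn.feature_extraction.text.TfidfVectorizer` (which applies a smoothed IDF) rather than this hand-rolled version.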
Job Type: Full-Time Internship
Location: In-Office (Bhayander)
About Phi Commerce
Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform which processes digital payments at the doorstep, online, and in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments and going beyond routine payment processing.
Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.
The core team comprises industry veterans with complementary skill sets and nearly 100 years of combined global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software, and Electra Card Services.
Awards & Recognitions:
The company's innovative work has been recognized at prestigious forums in the short span of its existence:
- Certification of Recognition as a Startup by the Department of Industrial Policy and Promotion.
- Winner of the "Best Payment Gateway of the Year" award at the Payments & Cards Awards 2018.
- Winner at the Payments & Cards Awards 2017 in 3 categories: Best Startup of the Year, Best Online Payment Solution of the Year (Consumer), and Best Online Payment Solution of the Year (Merchant).
- Winner of the NPCI IDEATHON on Blockchain in Payments.
- Shortlisted by the Govt. of Maharashtra among the top 100 start-ups pan-India across 8 sectors.
About the role:
As an SDET, you will work closely with the development, product, and QA teams to ensure the delivery of high-quality, reliable, and scalable software. You will be responsible for creating and maintaining automated test suites, designing testing frameworks, and identifying and resolving software defects. The role will also involve continuous improvement of the test process and promoting best practices in software development and testing.
Key Responsibilities:
- Develop, implement, and maintain automated test scripts for validating software functionality and performance.
- Design and develop testing frameworks and tools to improve the efficiency and effectiveness of automated testing.
- Collaborate with developers, product managers, and QA engineers to identify test requirements and create effective test plans.
- Write and execute unit, integration, regression, and performance tests to ensure high-quality code.
- Troubleshoot and debug issues identified during testing, working with developers to resolve them in a timely manner.
- Conduct code reviews to ensure code quality, maintainability, and testability.
- Work with CI/CD pipelines to integrate automated testing into the development process.
- Continuously evaluate and improve testing strategies, identifying areas for automation and optimization.
- Monitor the quality of releases by tracking test coverage, defect trends, and other quality metrics.
- Ensure that all tests are documented, maintainable, and reusable for future software releases.
- Stay up-to-date with the latest trends, tools, and technologies in the testing and automation space.
Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience as an SDET, software engineer, or quality engineer with a focus on test automation.
- Strong experience in automated testing frameworks and tools (e.g., Selenium, Appium, JUnit, TestNG, Cucumber).
- Proficiency in programming, particularly Java.
- Experience in designing and implementing test automation for web applications, APIs, and mobile applications.
- Strong understanding of software testing methodologies and processes (e.g., Agile, Scrum).
- Excellent problem-solving skills and attention to detail.
- Good communication and collaboration skills, with the ability to work effectively in a team.
- Knowledge of performance testing and load testing tools (e.g., JMeter, LoadRunner) is a plus.
- Experience with test management tools (e.g., TestRail, Jira).
- Knowledge of databases and ability to write SQL queries to validate test data.
- Experience in API testing and knowledge of RESTful web services.
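The last two bullets often go together in day-to-day SDET work: writing SQL checks against the system under test. A short sketch of validating test data with SQL, using an in-memory SQLite database (the table, columns, and sample rows are invented for the example, not from any real system):

```python
import sqlite3

# In-memory database standing in for the application's test datastore.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders (symbol, qty, status) VALUES (?, ?, ?)",
    [("INFY", 10, "FILLED"), ("TCS", 5, "FILLED"), ("SBIN", 0, "REJECTED")],
)

# Check 1: no filled order should have a non-positive quantity.
bad = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'FILLED' AND qty <= 0"
).fetchone()[0]
assert bad == 0, f"{bad} filled orders with invalid quantity"

# Check 2: every status value is one the application defines.
statuses = {row[0] for row in conn.execute("SELECT DISTINCT status FROM orders")}
assert statuses <= {"FILLED", "REJECTED", "PENDING"}
print("test data checks passed")
```

The same queries would run unchanged against a real RDBMS via its Python driver; SQLite just keeps the sketch self-contained.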
We are seeking an experienced and highly skilled Backend Java Engineer to join our team.
The ideal candidate will have strong expertise in Core Java, Spring Boot, Microservices, and building high-performance, scalable backend applications.
Responsibilities
- Develop, test, and deploy scalable and robust backend services using Java, Spring Boot, and Spring Framework.
- Design and implement RESTful APIs for seamless integrations.
- Contribute to architectural decisions involving microservices, APIs, and cloud-based solutions.
- Write clean, efficient, and reusable code following coding standards and best practices.
- Optimize application performance and participate in debugging and troubleshooting sessions.
- Collaborate with architects, product managers, and QA engineers to deliver high-quality releases.
- Conduct peer code reviews and ensure adherence to engineering best practices.
- Mentor junior engineers and support their technical growth where required.
Skills & Requirements
- Minimum 2 years of hands-on backend development experience.
- Strong proficiency in:
- Core Java / Java 8
- Spring Boot, Spring Framework
- Microservices architecture
- REST APIs
- Experience with:
- Kafka (preferred)
- MySQL or other relational databases
- Batch processing, application performance tuning, caching strategies
- Web security / application security
- Solid understanding of software design principles and scalable system design.
Preferred
- Male candidates preferred (client-mentioned requirement).
- Experience working in fintech, payments, or high-scale production environments
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience with AWS cloud, including in recent organizations
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay for F2F round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Dear Candidate
Looking for Telecom Advance Support Engineer (PSO)
If this opportunity interests you, kindly revert.
Job Description:
3+ years’ experience working as a Support Engineer / Network Engineer / Deployment Engineer / Solutions Engineer / Integration Engineer in the telecom deployment industry.
■ BE/B.Sc. in CS, EE, or Telecommunications graduate with honors from an elite university (70%–90% depending on the college; Delhi University a plus).
■ Telecom Knowledge (IMS, 4G) - Mandatory.
■ Linux OS, SQL knowledge – Mandatory. Vertica DB, scripting - a plus.
■ Open stack / Cloud based infrastructure (OpenStack/Kubernetes), knowledge is a plus
■ Be available to work off business hours to address critical matters/situations based on Radcom’s on-call support model
■ Willingness to work evening and night shifts and weekends.
■ Fluent English – Mandatory
■ Valid passport
■ Based in RADCOM India offices, Delhi, India.
■ Responsible for all the technical support aspects required by the client for RADCOM’s solutions, including integration, deployment, and customization of applications; and KPI reports for individual customer needs.
■ Support and deployment of Radcom’s solution at cloud-based customer environments (case to case basis)
■ Strong customer-facing skills; able to drive customer calls independently.
■ Able to drive and deliver small internal customer projects end to end, including all required internal communications.
■ Working closely with management on customer updates and future plans.
■ Daily maintenance and problem resolution, System patches and software upgrades, and routine system configuration
■ Identifies, diagnoses, and resolves issues related to System with good troubleshooting and root cause finding.
■ If required: travel for on-site support outside India, training, installation, configuration, etc.
Thanks & Regards
Shreya Tiwari
Technical Recruiter - HR
RADCOM
Review Criteria
- Strong IT Engineer Profile
- 4+ years of hands-on experience in Azure/Office 365 compliance and management, including policy enforcement, audit readiness, DLP, security configurations, and overall governance.
- Must have strong experience handling user onboarding/offboarding, identity & access provisioning, MFA, SSO configurations, and lifecycle management across Windows/Mac/Linux environments.
- Must have proven expertise in IT Inventory Management, including asset tracking, device lifecycle, CMDB updates, and hardware/software allocation with complete documentation.
- Hands-on experience configuring and managing FortiGate Firewalls, including routing, VPN setups, policies, NAT, and overall network security.
- Must have practical experience with FortiGate WiFi, AP configurations, SSID management, troubleshooting connectivity issues, and securing wireless environments.
- Must have strong knowledge and hands-on experience with Antivirus Endpoint Central (or equivalent) for patching, endpoint protection, compliance, and threat remediation.
- Must have solid understanding of Networking, including routing, switching, subnetting, DHCP, DNS, VPN, LAN/WAN troubleshooting.
- Must have strong troubleshooting experience across Windows, Linux, and macOS environments for system issues, updates, performance, and configurations.
- Must have expertise in Cisco/Polycom A/V solutions, including setup, configuration, video conferencing troubleshooting, and meeting room infrastructure support.
- Must have hands-on experience in Shell Scripting / Bash / PowerShell for automation of routine IT tasks, monitoring, and system efficiencies.
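The scripting bullet above is about exactly this kind of routine-automation task. A minimal Python sketch of a disk-usage check of the sort such scripts perform (the 90% threshold and root path are illustrative assumptions; a Bash or PowerShell equivalent would be a few lines of `df` or `Get-PSDrive`):

```python
import shutil

def check_disk(path="/", threshold=0.9):
    """Return (used_fraction, alert) for a filesystem; alert when usage crosses threshold."""
    usage = shutil.disk_usage(path)
    used = usage.used / usage.total
    return used, used >= threshold

used, alert = check_disk("/", threshold=0.9)
print(f"root filesystem {used:.0%} used, alert={alert}")
```

In a real environment the alert branch would page or email rather than print, and the script would run from cron or a monitoring agent.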
Job Specific Criteria:
- CV Attachment is mandatory
- Q1. Please rate your experience in troubleshooting (out of 10, 10 being highly experienced): A. Windows troubleshooting B. Linux troubleshooting C. MacBook troubleshooting
- Q2. Please rate your experience in the following processes (out of 10, 10 being highly experienced): A. User onboarding/offboarding B. Inventory management
- Q3. Please rate your experience with the following tools and administration (out of 10, 10 being highly experienced): A. FortiGate Firewall B. FortiGate WiFi C. Antivirus Endpoint Central D. Networking E. Cisco/Polycom A/V solutions F. Shell Scripting/Bash/PowerShell G. Azure/Office 365 compliance and management
- Q4. Are you okay for F2F round (Noida)?
- Q5. What's your current company?
- Q6. Are you okay for rotational shift (10am - 7pm and 2pm to 11pm)?
Role & Responsibilities:
We are seeking an experienced IT Infrastructure/System Administrator to manage, secure, and optimize our IT environment. The ideal candidate will have expertise in enterprise-grade tools, strong troubleshooting skills, and hands-on experience configuring secure integrations, managing endpoint deployments, and ensuring compliance across platforms.
- Administer and manage Office 365 suite (Outlook, SharePoint, OneDrive, Teams etc) and related services/configurations.
- Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
- Oversee IT compliance frameworks, audit processes, IT asset inventory management, and attendance systems.
- Administer Jira, FortiGate firewalls and Wi-Fi, antivirus solutions, and endpoint management systems.
- Provide network administration: routing, subnetting, VPNs, and firewall configurations.
- Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
- Manage Assets Explorer for device and asset management/inventory.
- Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
- Provide remote support for end-users, ensuring quick resolution of technical issues.
- Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
- Collaborate with internal teams and external vendors to resolve issues and optimize systems.
- Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.
Ideal Candidate:
- Proven hands-on experience with:
- Office 365 administration and compliance.
- User onboarding/offboarding processes.
- Compliance, audit, and inventory management tools.
- Jira administration, FortiGate firewall, Wi-Fi, and antivirus solutions.
- Networking fundamentals: subnetting, routing, switching.
- Patch management, updates, and vulnerability remediation across Windows, Linux, and macOS.
- Assets Explorer/inventory management
- Strong troubleshooting, documentation, and communication skills.
Preferred Skills:
- Scripting knowledge in Bash, PowerShell for automation.
- Experience working with Jira and Confluence.
Perks, Benefits and Work Culture:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will
be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction
in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools.
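Log analysis of the kind required above usually reduces to parsing structured lines and counting by severity. A minimal Python sketch over invented sample log lines (the log format and the OMS/RMS/FIX messages are assumptions for illustration, not from any real trading platform):

```python
import re
from collections import Counter

# Hypothetical application log, format assumed for illustration.
LOG = """\
09:15:01 INFO  OMS order 1001 accepted symbol=INFY
09:15:02 ERROR RMS margin check failed order=1002
09:15:03 INFO  OMS order 1003 accepted symbol=TCS
09:15:04 ERROR RMS margin check failed order=1004
09:15:05 WARN  FIX session latency above 50ms
"""

LINE = re.compile(r"^(?P<ts>\d{2}:\d{2}:\d{2}) (?P<level>\w+)\s+(?P<msg>.*)$")

levels = Counter()
errors = []
for line in LOG.splitlines():
    m = LINE.match(line)
    if not m:
        continue                       # skip lines that don't match the format
    levels[m["level"]] += 1
    if m["level"] == "ERROR":
        errors.append(m["msg"])

print(levels)    # Counter({'INFO': 2, 'ERROR': 2, 'WARN': 1})
print(errors[0]) # first error message, the starting point for escalation
```

The same pattern scales to `grep`/`awk` one-liners on a Linux host, which is typically how a support engineer triages during trading hours.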
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
AI-based systems design and development: the entire pipeline from image/video ingest, metadata ingest, processing, and encoding to transmission.
Implementation and testing of advanced computer vision algorithms.
Dataset search, preparation, and annotation; training, testing, and fine-tuning of vision CNN models; multimodal AI, LLMs, hardware deployment, and explainability.
Detailed analysis of results. Documentation, version control, client support, and upgrades.
REVIEW CRITERIA:
MANDATORY:
- Strong Senior/Lead DevOps Engineer Profile
- Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an IC role
PREFERRED:
- Must be proficient in scripting languages (Bash, Python) for automation and operational tasks.
- Must have strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- Candidates from NCR region only (No outstation candidates).
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, and secure access policies and AWS organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor’s or Master’s degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
About the Role
We are seeking an accomplished DevOps Lead with 12+ years of experience in cloud infrastructure, automation, Blockchain, and CI/CD processes. The DevOps Lead will play a pivotal role in architecting scalable cloud environments, driving automation, ensuring secure deployments, and enabling efficient software delivery pipelines. The role involves working with AWS, Huawei Cloud, Kubernetes, Terraform, blockchain-based infrastructure, and modern DevOps toolchains while providing leadership, technical guidance, and client-facing communication.
Key Responsibilities
Leadership & Team Management
● Lead, mentor, and grow a team of DevOps engineers, setting technical direction and ensuring adherence to best practices.
● Facilitate collaboration across engineering, QA, security, and blockchain development teams.
● Act as the primary technical liaison with clients, managing expectations, requirements, and solution delivery.
Infrastructure Automation & Management
● Architect, implement, and manage infrastructure as code (IaC) using Terraform across multi-cloud environments.
● Standardize environments across AWS, Digital Ocean, Huawei Cloud with a focus on scalability, reliability, and security.
● Manage provisioning, scaling, monitoring, and cost optimization of infrastructure resources.
CI/CD & Automation
● Build, maintain, and optimize CI/CD pipelines supporting multiple applications and microservices.
● Integrate automated testing, static code analysis, and security scans into the pipelines.
● Implement blue-green / canary deployments and ensure zero downtime release strategies.
● Promote DevSecOps by embedding security policies into every phase of the delivery pipeline.
Containerization & Orchestration
● Deploy, manage, and monitor applications on Kubernetes clusters (EKS, CCE, or equivalent).
● Utilize Helm charts, Kustomize, and operators for environment consistency.
● Optimize container performance and manage networking, storage, and secrets.
Monitoring, Logging & Incident Response
● Implement and manage monitoring and alerting solutions (Prometheus, Grafana, ELK, CloudWatch, Loki).
● Define SLOs, SLIs, and SLAs for production systems.
● Lead incident response, root cause analysis, and implement preventative measures.
Governance, Security & Compliance
● Implement best practices for secrets management, key rotation, and role-based access control.
● Integrate vulnerability scanning and security audits into pipelines.
Required Skills & Qualifications
● 12+ years of experience in DevOps, with at least 5+ years in a lead capacity.
● Proven expertise with Terraform and IaC across multiple environments.
● Strong hands-on experience with AWS and Huawei Cloud infrastructure services.
● Deep expertise in Kubernetes cluster administration, scaling, monitoring, and networking.
● Advanced experience designing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or similar.
● Solid background in automated deployments, configuration management, and version control (Git, Ansible, Puppet, or Chef).
● Strong scripting and automation skills (Python, Bash, Go, or similar).
● Proficiency with monitoring/observability tools (Prometheus, Grafana, ELK, CloudWatch, Datadog).
● Strong understanding of blockchain infrastructure, node operations, staking setups, and deployment automation.
● Knowledge of container security, network policies, and zero-trust principles.
● Excellent communication, client handling, and stakeholder management skills with proven ability to present complex DevOps concepts to non-technical audiences.
● Ability to design and maintain highly available, scalable, and fault-tolerant systems in production environments.
Department: S&C – Site Reliability Engineering (SRE)
Experience Required: 4–8 Years
Location: Bangalore / Pune / Mumbai
Employment Type: Full-time
- Provide Tier 2/3 technical product support to internal and external stakeholders.
- Develop automation tools and scripts to improve operational efficiency and support processes.
- Manage and maintain system and software configurations; troubleshoot environment/application-related issues.
- Optimize system performance through configuration tuning or development enhancements.
- Plan, document, and deploy applications in Unix/Linux, Azure, and GCP environments.
- Collaborate with Development, QA, and Infrastructure teams throughout the release and deployment lifecycle.
- Drive automation initiatives for release and deployment processes.
- Coordinate with infrastructure teams to manage hardware/software resources, maintenance, and scheduled downtimes across production and non-production environments.
- Participate in on-call rotations (minimum one week per month) to address critical incidents and off-hour maintenance tasks.
Key Competencies
- Strong analytical, troubleshooting, and critical thinking abilities.
- Excellent cross-functional collaboration skills.
- Strong focus on documentation, process improvement, and system reliability.
- Proactive, detail-oriented, and adaptable in a fast-paced work environment.
Junior DevOps Engineer
Experience: 2–3 years
About Us
We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.
Key Responsibilities
- Support deployment, monitoring, and maintenance of trading and fintech applications.
- Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
- Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
- Troubleshoot and resolve production issues in a fast-paced environment.
- Implement and maintain CI/CD pipelines for continuous integration and delivery.
- Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
- Assist in maintaining compliance with financial industry standards and security best practices.
Required Skills
- 2–3 years of hands-on experience in DevOps or related roles.
- Proficiency in Linux/Unix environments.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Working knowledge of scripting languages (Bash, Python).
- Experience with configuration management tools (Ansible, Puppet, Chef).
- Understanding of networking concepts and security practices.
- Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
- Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
Preferred Skills
- Experience in fintech, trading, or financial services.
- Knowledge of high-frequency trading systems or low-latency environments.
- Familiarity with financial data protocols and APIs.
- Understanding of regulatory requirements in financial technology.
What We Offer
- Opportunity to work on cutting-edge fintech/trading platforms.
- Collaborative and learning-focused environment.
- Competitive salary and benefits.
- Career growth in a rapidly expanding domain.
Review Criteria
- Strong DevOps/Cloud Engineer profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top-tier product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in Linux administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
🚀 Hiring: PL/SQL Developer
⭐ Experience: 5+ Years
📍 Location: Pune
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
What We’re Looking For:
☑️ Hands-on PL/SQL developer with strong database and scripting skills, ready to work onsite and collaborate with cross-functional financial domain teams.
Key Skills:
✅ Must Have: PL/SQL, SQL, Databases, Unix/Linux & Shell Scripting
✅ Nice to Have: DevOps tools (Jenkins, Artifactory, Docker, Kubernetes), AWS/Cloud, Basic Python, AML/Fraud/Financial domain, Actimize (AIS/RCM/UDM)
Job Title: Senior DevOps Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Required Qualifications
● Experience:
○ 5+ years of hands-on experience as a DevOps Engineer or similar role, with proven expertise in building and customizing Helm charts from scratch (not just using pre-existing ones).
○ Demonstrated ability to design and whiteboard DevOps pipelines, including CI/CD workflows for microservices applications.
○ Experience packaging and deploying applications with stateful dependencies (e.g., databases, persistent storage) in varied environments: on-prem (air-gapped and non-air-gapped), single-tenant cloud, multi-tenant cloud, and developer trials.
○ Proficiency in managing deployments in Kubernetes clusters, including offline installations, upgrades via Helm, and adaptations for client restrictions (e.g., no additional tools or VMs).
○ Track record of handling client interactions, such as asking probing questions about infrastructure (e.g., OS versions, storage solutions, network restrictions) and explaining technical concepts clearly.
● Technical Skills:
○ Strong knowledge of Helm syntax and functionality (e.g., Go templating, hooks, subcharts, dependency management).
○ Expertise in containerization with Docker, including image management (save/load, registries like Harbor or ECR).
○ Familiarity with CI/CD tools such as Jenkins, ArgoCD, GitHub Actions, and GitOps for automated and manual deployments.
○ Understanding of storage solutions for on-prem and cloud, including object/file storage (e.g., MinIO, Ceph, NFS, cloud-native options like S3/EBS).
○ In-depth knowledge of Kubernetes concepts: StatefulSets, PersistentVolumes, namespaces, HPA, liveness/readiness probes, network policies, and RBAC.
○ Solid grasp of cloud networking: VPCs (definition, boundaries, virtualization via SDN, differences from private clouds) and bare metal vs. virtual machines (trade-offs in resource efficiency, flexibility, and scalability).
○ Ability to work in air-gapped environments, preparing offline artifacts and ensuring self-contained deployments.
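Preparing offline artifacts for an air-gapped install, as required above, typically means saving every image a chart references to a tarball and re-tagging it for the client's internal registry. A rough sketch of that bookkeeping (the image and registry host below are made up for illustration; this is not any vendor's tooling):

```python
def offline_bundle_commands(images: list[str], internal_registry: str) -> list[str]:
    """Generate docker save / retag commands for an air-gapped bundle.

    Each upstream image is saved to a tarball for offline transfer,
    then retagged under the client's internal registry for the push.
    """
    cmds = []
    for image in images:
        archive = image.replace("/", "_").replace(":", "_") + ".tar"
        local = f"{internal_registry}/{image.split('/')[-1]}"
        cmds.append(f"docker save {image} -o {archive}")
        cmds.append(f"docker tag {image} {local}")
    return cmds

cmds = offline_bundle_commands(["nginx:1.25"], "harbor.client.local")
assert cmds[0] == "docker save nginx:1.25 -o nginx_1.25.tar"
assert cmds[1] == "docker tag nginx:1.25 harbor.client.local/nginx:1.25"
```

Real bundles also carry the chart tarball itself plus checksums; the point is that everything crosses the air gap as files, with no registry pulls on the far side.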
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
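Resolution "within defined TAT", as listed above, is ultimately timestamp arithmetic over incident records. A hedged sketch of the check (field names and windows are illustrative, not any specific ticketing tool's schema):

```python
from datetime import datetime, timedelta

def within_tat(opened: datetime, resolved: datetime, tat_hours: int) -> bool:
    """Check whether an incident was resolved inside its TAT window."""
    return resolved - opened <= timedelta(hours=tat_hours)

opened = datetime(2024, 1, 15, 9, 30)
assert within_tat(opened, datetime(2024, 1, 15, 12, 0), tat_hours=4)      # 2.5h: OK
assert not within_tat(opened, datetime(2024, 1, 15, 15, 0), tat_hours=4)  # 5.5h: breach
```

Production TAT reporting usually also excludes time spent in "pending client" states, which is a business rule layered on top of the same comparison.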
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Role: Sr. Data Scientist
Exp: 4-8 Years
CTC: up to 25 LPA
Technical Skills:
● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
● Exposure to natural language processing (NLP) techniques is a plus.
Educational Qualifications:
- B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
- A master's degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.
Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop, validate, and implement models and code in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the business's information needs to IT and IT's data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Willingness to learn and keep up with periodic changes in regulatory requirements, as per the FED.
Job description:
Roles & Responsibilities
At Dolat, code is our business, so naturally, the Core Engineering and Systems team is at the centre of what we do. Our community of developers has designed and continues to enhance one of the fastest trading platforms using the latest tools and technologies. As a Software Developer, you’ll draw upon your computer science, mathematical, and analytical abilities to develop complex and nimble code used to grow our business and increase the efficiency of the global financial markets. Your responsibilities may include any of the following, which will require you to exercise discretion and independent judgment:
Augmenting, improving, redesigning, and/or re-implementing Dolat's low-latency/high throughput production trading environment, which collects data from and disseminates orders to exchanges around the world
Optimizing this platform by using network and systems programming, as well as other advanced techniques
Developing systems that provide easy access to historical market data and trading simulations
Building risk-management and performance-tracking tools
Shaping the future of Dolat through regular interviewing and infrequent campus recruiting trips
Implementing domain-optimized data structures
Learn and internalize the theories behind the current trading system
Participate in the design, architecture, and implementation of automated trading systems, taking ownership of the system from design through implementation
Skills & Experience
A strong background in data structures, algorithms, and object-oriented programming in C++
Exchange Connectivity experience a plus
Familiarity with Linux environments; Windows a plus
High-level knowledge and competencies in one or more of the following areas: TCP stack optimization; multi-core, single-machine parallelism; low-level performance, cache optimization, and profiling
Additional requirements include:
Experience in distributed and/or highly concurrent systems is a plus
Experience in low-latency systems and/or high transaction environments is a plus
A passion for new technologies and ideas
The ability to manage multiple tasks in a fast-paced environment
Experience in network topologies and protocols like TCP and UDP
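The TCP/UDP requirement above can be made concrete: UDP is connectionless and unacknowledged, which is why market-data feeds often prefer it over TCP's retransmission-induced latency. A minimal loopback sketch (the payload is illustrative):

```python
import socket

# Bind a receiver to an ephemeral localhost port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# UDP has no handshake: the sender just fires a datagram at the address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"market-data-tick", addr)

data, _ = receiver.recvfrom(1024)
assert data == b"market-data-tick"

sender.close()
receiver.close()
```

The trade-off is that delivery and ordering are not guaranteed; feed handlers typically add sequence numbers and gap-recovery on top.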
Job Description for PostgreSQL Lead
Job Title: PostgreSQL Lead
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Our Customer Account Management team is vital in ensuring client satisfaction and loyalty.
Role Overview
As the PostgreSQL Lead, you will own the design, implementation, and operational excellence of PostgreSQL environments. You’ll lead technical decision-making, mentor the team, interface with customers, and drive key initiatives covering performance tuning, HA architectures, migrations, and cloud deployments.
Key Responsibilities
- Lead PostgreSQL production environments: architecture, stability, performance, and scalability
- Oversee complex troubleshooting, query optimization, and performance analysis
- Architect and maintain HA/DR systems (e.g., Streaming Replication, Patroni, repmgr)
- Define backup, recovery, replication, and failover protocols
- Guide DB migrations, patches, and upgrades across environments
- Collaborate with DevOps and cloud teams for infrastructure automation
- Use monitoring (pg_stat_statements, PMM, Nagios or any monitoring stack) to proactively resolve issues
- Provide technical mentorship—conduct peer reviews, upskill, and onboard junior DBAs
- Lead customer interactions: understand requirements, design solutions, and present proposals
- Drive process improvements and establish database best practices
Requirements
- Experience: 4-5 years in PostgreSQL administration, with at least 2+ years in a leadership role
- Performance Optimization: Expert in query tuning, indexing strategies, partitioning, and execution plan analysis.
- Extension Management: Proficient with critical PostgreSQL extensions including:
- pg_stat_statements – query performance tracking
- pg_partman – partition maintenance
- pg_repack – online table reorganization
- uuid-ossp – UUID generation
- pg_cron – native job scheduling
- auto_explain – capturing costly queries
- Backup & Recovery: Deep experience with pgBackRest, Barman, and implementing Point-in-Time Recovery (PITR).
- High Availability & Clustering: Proven expertise in configuring and managing HA environments using Patroni, repmgr, and streaming replication.
- Cloud Platforms: Strong operational knowledge of AWS RDS and Aurora PostgreSQL, including parameter tuning, snapshot management, and performance insights.
- Scripting & Automation: Skilled in Linux system administration, with advanced scripting capabilities in Bash and Python.
- Monitoring & Observability: Familiar with pg_stat_statements, PMM, Nagios, and building custom dashboards using Grafana and Prometheus.
- Leadership & Collaboration: Strong problem-solving skills, effective communication with stakeholders, and experience leading database reliability and automation initiatives.
Preferred Qualifications
- Bachelor’s/Master’s degree in CS, Engineering, or equivalent
- PostgreSQL certifications (e.g., EDB, AWS)
- Consulting/service delivery experience in managed services or support roles
- Experience in large-scale migrations and modernization projects
- Exposure to multi-cloud environments and DBaaS platforms
What We Offer:
- Competitive salary and benefits package.
- Opportunity to work with a dynamic and innovative team.
- Professional growth and development opportunities.
- Collaborative and inclusive work environment.
Job Details:
- Work time: General shift
- Working days: 5 Days
- Mode of Employment - Work From Home
- Experience - 4-5 years
Job Overview :
We are looking for an experienced PL/SQL Developer to join our Professional Services team. The role involves developing and configuring enterprise-grade solutions, supporting clients during testing, and collaborating with internal teams. Candidates with strong expertise in PL/SQL and Unix/Linux are preferred, along with exposure to cloud, DevOps, or financial domains.
Key Responsibilities
- Develop and configure software features as per design specifications and enterprise standards.
- Interact with clients to resolve technical queries and support User Acceptance Testing (UAT).
- Collaborate with internal R&D, Professional Services, and Customer Support teams.
- Occasionally work at client sites or across different time zones.
- Ensure secure, scalable, and high-quality code.
Must-Have Skills
- Strong hands-on experience in PL/SQL, SQL, and Databases (Oracle, MS-SQL, MySQL, Postgres, MongoDB).
- Proficiency in Unix/Linux commands and shell scripting.
Nice-to-Have Skills
- Basic understanding of DevOps tools (Jenkins, Artifactory, Docker, Kubernetes).
- Exposure to Cloud environments (AWS preferred).
- Awareness of Python programming.
- Experience in AML, Fraud, or Financial Markets domain.
- Knowledge of Actimize (AIS/RCM/UDM).
Education & Experience
- Bachelor’s degree in Computer Science, Engineering, or equivalent.
- 4–8 years of overall IT experience, with 4+ years in software development.
- Strong problem-solving, communication, and customer interaction skills.
- Ability to work independently in time-sensitive environments.
About the Role:
We’re looking for a Python Developer to build, optimize, and scale applications that power our trading systems. You’ll work on automation, server clusters, and high-performance infrastructure in a fast-paced, tech-driven environment.
What you’ll do:
- Build and test applications end-to-end.
- Automate workflows with scripts.
- Optimize system performance and reliability.
- Manage code versions and collaborate with peers.
- Work with clusters of 100+ servers.
What we’re looking for:
- Strong Python fundamentals (OOP, data structures, algorithms).
- Experience with Linux commands, Bash scripting.
- Basics of Numpy, Matplotlib, and PostgreSQL.
- Hands-on with automation and scripting tools.
- Problem solver with a focus on scalability & optimization.
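The "data structures and algorithms" fundamentals above show up directly in trading code; for example, price-time priority (best price first, earlier arrival breaks ties) maps naturally onto a heap. A hedged sketch, not any firm's actual system (class and method names are made up):

```python
import heapq
import itertools

class BuyOrderBook:
    """Price-time priority queue for buy orders.

    heapq is a min-heap, so the price is negated to surface the highest
    bid first, and a monotonic counter breaks ties by arrival order.
    """
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def add(self, order_id: str, price: float) -> None:
        heapq.heappush(self._heap, (-price, next(self._seq), order_id))

    def pop_best(self) -> str:
        return heapq.heappop(self._heap)[2]

book = BuyOrderBook()
book.add("A", 101.5)
book.add("B", 102.0)
book.add("C", 102.0)
assert book.pop_best() == "B"  # best price wins
assert book.pop_best() == "C"  # same price: earlier arrival wins
assert book.pop_best() == "A"
```

Both operations are O(log n), which is why heaps are a standard interview topic for this kind of role.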
Why Dolat?
- Work at the intersection of finance & tech.
- Opportunity to solve complex engineering problems.
- Learn and grow with a team of smart, collaborative engineers.
Key Skills & Experience:
🔹 #SQL, #OraclePLSQL, #Unix / Perl / Python
🔹 #OracleRDBMS (latest) | #PostgreSQL a plus
🔹 DB Objects | Performance Tuning | Analytics
🔹 #UnixLinux, Agile (Scrum, JIRA, Confluence, BitBucket)
🔹 DevOps (UC4, Maven, Jenkins, CI/CD)
🔹 ETL, Data Modeling, Informatica
🔹 Java, Angular/React, Spring Boot, XML, JSON, API Gateway
🔹 #OracleAPEX an advantage
We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.
Key Responsibilities
- Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders
- DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization
- Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation
- System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations
- Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems
- Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices
- Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
Qualifications and Experience
- Bachelor's degree in Computer Science, Engineering, or related technical field
- Proven experience of at least 5 years as a Software Engineer, including at least 2 years as a DevOps Engineer or similar role, working with complex software projects and environments.
- Excellent knowledge of cloud technologies, containers, and orchestration.
- Proficiency in scripting and programming languages such as Python and Bash.
- Experience with Linux operating systems and command-line tools.
- Proficient in using Git for version control.
Good to Have
- Experience with Nagios or similar monitoring and alerting systems
- Backend and/or frontend development experience for operational tooling
- Previous experience working in a trading firm or financial services environment
- Knowledge of database management and SQL
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Experience with DevOps practices and CI/CD pipelines
- Understanding of network protocols and system administration
Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)
Job Opening: FreeSWITCH Developer
Location: Ecospace Business Park, Rajarhat, Newtown, Kolkata
Employment Type: Full-Time, Permanent
Shift: Day Shift
Experience: Minimum 4+ years in FreeSWITCH/VoIP Development
Responsibilities:
- Design, develop, deploy, troubleshoot, and maintain tools and services supporting our cloud telephony network.
- Customize FreeSWITCH for large-scale audio/video conferencing (1000–1500 concurrent calls).
- Expertise in SIP, RTP, RTCP, TURN, STUN, NAT, TLS.
- Hands-on experience with RTP Proxy and routed audio conferences.
- Understanding of SDP Protocol offer/answer mechanism.
- Work with load testing tools for FreeSWITCH audio conferences.
- Deploy and manage multiple FreeSWITCH instances using load balancers.
- Debug issues using packet captures (Wireshark/Ngrep).
- Collaborate with mobile/API teams for integration and support.
- Knowledge of codecs (PCMU, PCMA, G729, Opus) and open-source telephony technologies (FreeSWITCH, WebRTC).
- Familiarity with SIP servers (SER/OpenSER), proxy servers, SBC, SIPX is a plus.
Qualifications:
- Bachelor’s degree in Engineering (B.Tech/MCA).
- 4+ years of hands-on experience in FreeSWITCH development or related VoIP technologies.
- Strong knowledge of Linux environments and command-line tools.
- Basic to intermediate SQL knowledge.
- Proficiency in scripting (Bash, Python, Perl) for automation.
- Strong understanding of VoIP protocols (SIP, RTP), networking principles (TCP/IP, DNS, DHCP, routing protocols).
- Excellent troubleshooting skills with ability to resolve complex technical issues.
- Strong problem-solving, communication, and collaboration skills.
Industry Preference:
- Travel (Preferred)
- Any Industry (Required)
Job Summary:
We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.
Key Responsibilities:
Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.
Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy.
Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.
Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.
Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.
Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.
Required Qualifications:
5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.
Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.
Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.
Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.
Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.
Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.
Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.
Experience implementing and maintaining CI/CD pipelines with industry-standard tools.
Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).
Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.
Excellent problem-solving skills, attention to detail, and a proactive mindset.
Strong communication skills and the ability to collaborate with diverse technical teams.
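The asynchronous-programming requirement above is worth illustrating: with asyncio, independent I/O-bound calls can run concurrently so total wall time tracks the slowest call, not the sum. A minimal sketch (names and delays are illustrative):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (HTTP request, DB query, ...).
    await asyncio.sleep(delay)
    return name

async def main() -> list:
    # gather() schedules both coroutines concurrently: wall time is
    # roughly max(delays) rather than their sum.
    return await asyncio.gather(fetch("users", 0.05), fetch("orders", 0.05))

results = asyncio.run(main())
assert results == ["users", "orders"]  # gather preserves argument order
```

Celery covers the same ground for work that must survive process restarts; asyncio is the in-process option.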
Preferred Qualifications (Nice to Have):
Experience with other Python frameworks (FastAPI, Django)
Knowledge of container orchestration tools like Kubernetes
Familiarity with monitoring tools like Prometheus, Grafana, or Datadog
Prior experience working in an Agile/Scrum environment
Contributions to open-source projects or technical blogs
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Job Responsibilities:
- Manage and maintain the efficient functioning of containerized applications and systems within the organization
- Design, implement, and manage scalable Kubernetes clusters in cloud or on-premise environments
- Develop and maintain CI/CD pipelines to automate infrastructure and application deployments, and track all automation processes
- Implement workload automation using configuration management tools, as well as infrastructure as code (IaC) approaches for resource provisioning
- Monitor, troubleshoot, and optimize the performance of Kubernetes clusters and underlying cloud infrastructure
- Ensure high availability, security, and scalability of infrastructure through automation and best practices
- Establish and enforce cloud security standards, policies, and procedures, and work with agile methodologies
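The IaC-style resource provisioning described in the responsibilities above can be sketched in miniature: programmatically generating a Kubernetes Deployment manifest (Kubernetes accepts JSON as well as YAML). This is an illustrative sketch only; the resource names are hypothetical.

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Serialize for `kubectl apply -f -` or an API call
manifest = deployment_manifest("web", "nginx:1.27", replicas=3)
print(json.dumps(manifest, indent=2))
```

In practice this generation step would live behind Terraform, Helm, or a CI job rather than a standalone script, but the shape of the resource is the same.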
Primary Requirements:
- Kubernetes: Proven experience in managing Kubernetes clusters (min. 2-3 years)
- Linux/Unix: Proficiency in administering complex Linux infrastructures and services
- Infrastructure as Code: Hands-on experience with CM tools like Ansible, as well as knowledge of resource provisioning with Terraform or other cloud-based utilities
- CI/CD Pipelines: Expertise in building and monitoring complex CI/CD pipelines to manage the build, test, packaging, containerization, and release processes of software
- Scripting & Automation: Strong scripting and process automation skills in Bash, Python
- Monitoring Tools: Experience with monitoring and logging tools (Prometheus, Grafana)
- Version Control: Proficient with Git and familiar with GitOps workflows.
- Security: Strong understanding of security best practices in cloud and containerized environments.
Skills/Traits that would be an advantage:
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Job Summary
We are seeking a highly skilled and motivated Linux Device Driver Engineer with strong C/C++ programming skills and hands-on experience in Linux driver development. The ideal candidate will have a proven track record of working with kernel modules and hardware interfaces, and be comfortable debugging and optimizing low-level system software.
Key Responsibilities
- Porting existing Linux device drivers to new platforms, SoCs, and kernel versions.
- New driver development for custom hardware components and peripherals.
- Debugging kernel and driver-level issues using industry-standard tools.
- Integration & bring-up of hardware with Linux-based systems.
- Collaborate with hardware teams to interpret specifications and enable device functionality.
- Optimize drivers for performance, reliability, and resource efficiency.
- Write clear technical documentation for driver APIs, design, and integration steps.
Required Skills & Qualifications
- Bachelor’s/Master’s in Computer Science, Electronics, or related field.
- 4 to 8 years of professional experience in software development.
- Strong proficiency in C/C++ programming and memory management.
- Hands-on experience with any Linux device driver (character, block, network, USB, PCIe, I2C, SPI, etc.).
- Good understanding of Linux kernel architecture, module programming, and build systems.
- Knowledge of interrupt handling, DMA, and device tree configuration.
- Familiarity with cross-compilation and embedded Linux toolchains.
- Experience with debugging tools (GDB, ftrace, perf, printk, etc.).
- Version control experience (Git).
Preferred Skills
- Exposure to multiple driver types (networking, storage, multimedia, etc.).
- Experience with Yocto, Buildroot, or similar embedded Linux environments.
- Knowledge of real-time Linux and RT patches.
- Scripting knowledge (Python, Bash) for testing and automation.
Soft Skills
- Strong analytical and debugging skills.
- Good communication and collaboration abilities.
- Ability to work independently and take ownership of deliverables.
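The driver role above lists scripting (Python, Bash) for testing and automation; a small example of that kind of tooling is filtering kernel log output for a given driver's printk messages. The driver name `mydrv` and the log lines are hypothetical.

```python
import re

def driver_messages(log_lines, driver="mydrv"):
    """Extract kernel-log lines emitted by a given driver, matching the
    conventional '[timestamp] driver: message' printk prefix."""
    pat = re.compile(r"\[\s*[\d.]+\]\s+" + re.escape(driver) + r":\s+(.*)")
    return [m.group(1) for line in log_lines if (m := pat.match(line))]

# Hypothetical dmesg-style output
log = [
    "[   12.345678] mydrv: probe succeeded for device 0x1af4",
    "[   12.345901] usb 1-1: new high-speed USB device",
    "[   13.000001] mydrv: irq 42 registered",
]
print(driver_messages(log))
# ['probe succeeded for device 0x1af4', 'irq 42 registered']
```

On a real system the input would come from `dmesg` or `/dev/kmsg`, and a test harness would assert on the expected probe/IRQ messages after loading the module.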
Role Description
This is a full-time on-site role for a Cloud & DevOps Engineer located in Coimbatore. The Cloud & DevOps Engineer will be responsible for managing and optimizing cloud infrastructure, implementing continuous integration and deployment processes, and ensuring the smooth operation of services. The role will involve working with Kubernetes for container orchestration and managing Linux environments.
Qualifications
- Proficiency in Software Development skills
- Experience with Continuous Integration and Deployment processes
- Skills in Kubernetes management and orchestration
- Strong understanding of Linux operating systems
- Ability to work collaboratively in a team environment
- Relevant certifications in cloud and DevOps technologies are a plus
Job Title: Sr. DevOps Engineer
Location: Bengaluru, India (hybrid work type)
Reports to: Sr. Engineering Manager
About Our Client :
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without the costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage containerized applications and orchestrate them with Kubernetes.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies, including Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
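The billing and cost-optimization duties above often start with a report like the following sketch: aggregating cost line items per service and ranking them. The record format here is hypothetical, not an actual AWS Cost Explorer response.

```python
from collections import defaultdict

def cost_by_service(records):
    """Aggregate (service, usd) cost line items into per-service totals,
    sorted descending -- the shape of a simple cost report."""
    totals = defaultdict(float)
    for service, usd in records:
        totals[service] += usd
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical daily line items
records = [("EC2", 120.0), ("S3", 14.5), ("EC2", 80.0), ("CloudFront", 9.25)]
print(cost_by_service(records))
# [('EC2', 200.0), ('S3', 14.5), ('CloudFront', 9.25)]
```

In production the records would come from cost-allocation tags or a billing export, and the top entries would drive rightsizing or scaling decisions (e.g. via Karpenter, as mentioned below in the requirements).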
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools such as GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
• You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.
Pace Robotics is a construction robotics startup incubated by SINE IIT-Bombay and backed by Pidilite Ventures and IIMA Ventures. We are making the construction process faster, smarter, and more efficient through robots that can perform advanced construction tasks and digitize execution data in real time. Our first product is a modular wall-finishing robot for plastering, putty, and painting of building interiors. Our customers include some of the biggest names in Indian real estate and construction. To support our next stage of growth and to take the product from prototype to manufacturing-ready design, we are looking for an embedded systems lead who will lead the design and implementation of software for embedded devices and systems end to end, from requirements gathering to commercial deployment.
What you will do:
● Conceptualize, design, fabricate, integrate and test overall embedded architecture, embedded boards, software and circuitry for Pace Robotics’ fleet of AMRs.
● Develop embedded code which enables simultaneous exchange of data between various modules and sensors to facilitate fast/robust decision making for the robot and real time controls.
● Selection of Communication Protocols, actuation sequencing and deep involvement with embedded software and control strategy.
● Develop and maintain various engineering tools used to debug, analyze, and test embedded products.
● Work as the common link between the hardware, robot autonomy, and software teams, converting inputs from each team along with overall product/customer needs into electronic circuits, software, and hardware.
● Selection and integration of all active, passive components, micro-controller, processor w/peripherals and power components.
● Work on improving the efficiency of the power distribution and consumption.
● Work on designing the wiring scheme of the whole autonomous system.
Who you are:
● Master's or Bachelor's (with 8-12 yrs exp) in Electronics & Communication / Electronics & Instrumentation / Electrical & Electronics / Embedded Systems with relevant experience in industrial R&D projects.
● You have the following technical skill sets:
o Expertise with latest-generation Cortex-A and Cortex-M series microcontrollers/processors and Nvidia Orin/Jetson platforms
o Expertise with C, C++, and Python programming
o Expertise with analog and digital electronics (BJTs, MOSFETs, amplifiers, op-amps, etc.)
o Knowledge of different communication protocols such as I2C, Ethernet (TCP/IP, UDP), EtherCAT, CAN, etc.
o Good understanding of electronic circuits and schematics
o Expertise with RTOS (FreeRTOS, VxWorks, etc.)
o Expertise with Linux (Ubuntu and RTLinux) programming concepts
o Excellent understanding of signal processing methods
o Experienced in PCB and wiring harness design
o Proficient with MATLAB, Simulink, and DAQ software
● Good to have:-
o Knowledge of robotics concepts and fundamentals such as Mapping, localization, path planning and perception
o Knowledge of Computer Vision, ML and DL
o Knowledge of control systems
o Knowledge of kinematics, dynamics
● Strong quantitative and data interpretation skills
o Highly organized and methodical in everything you do
o You love solving problems and look at them as an opportunity to grow
o You keep up to date with the latest developments and research
o Highly curious, eager to learn, a strong team player, and a motivator for junior colleagues
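As a toy instance of the signal-processing methods this role calls for, a moving-average filter is often the first smoothing pass applied to noisy sensor samples before they feed robot control logic. This is an illustrative sketch, not Pace Robotics' actual pipeline.

```python
def moving_average(samples, window=3):
    """Sliding-window moving-average filter over a list of sensor samples.
    Returns len(samples) - window + 1 smoothed values."""
    if window < 1 or window > len(samples):
        raise ValueError("window must be in [1, len(samples)]")
    out = []
    acc = sum(samples[:window])          # first window sum
    out.append(acc / window)
    for i in range(window, len(samples)):
        acc += samples[i] - samples[i - window]  # slide the window in O(1)
        out.append(acc / window)
    return out

print(moving_average([1, 2, 3, 4, 5], window=3))  # [2.0, 3.0, 4.0]
```

On an actual embedded target the equivalent would typically be a fixed-point C implementation in an interrupt or RTOS task, but the windowing logic is the same.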
What's in it for you:
Globally, construction represents a $1.6 trillion opportunity just by making the work more efficient. This is the market we are after. You will be an integral part of the core team of an early-stage startup on a mission to change the way we build our world and disrupt one of the oldest and largest industries. You will be designing products that can change the landscape of construction, become ubiquitous on project sites, and be synonymous with the construction of the future (think of the JCBs and tower cranes of today). We promise you an environment that promotes unhindered innovation and provides exciting opportunities for accelerated growth. You will be working with a stellar engineering team with diverse experience in developing robots for the automotive, agriculture, and construction industries. You will have immense learning opportunities, as you will be working on never-seen-before products in an industry that's ripe for technological disruption.
The compensation includes company stock through ESOPs.
Required skills / knowledge
• Experienced in software development and proficient in one or more of the following programming languages: Python, Java, and/or Perl.
• Experience with SDLC (software development lifecycle) best practices on large software development projects is a must.
• Good knowledge of computer science fundamentals.
• Good organizational and English communication skills; able to prioritize multiple projects and objectives.
Preferred skills / knowledge:
• Knowledge of storage technologies and disciplines, namely Fibre Channel, iSCSI, SCSI, NFS, CIFS, POSIX, object storage, SAS, SATA, flash (NVMe, SSD, etc.), RAID, erasure coding, distributed/scale-out storage, file systems, high-availability methods, and working knowledge of databases is strongly preferred. Understanding of networking protocols and connectivity is preferred.
• System architecture experience within enterprise UNIX (RHEL)/Windows Server environments, container environments (Red Hat OpenShift/Kubernetes/Docker, etc.), and cloud environments (Azure/AWS/Google) is highly desirable but not required.
• Understanding of client/server and scale-out architectures, virtualization, and performance and capacity management is strongly preferred.
• Good knowledge and experience of Linux provisioning and system configuration management tools such as Ansible, Puppet, SaltStack, or Chef is strongly preferred. Experience in automation of a large-scale Linux deployment is preferred.
• Effective troubleshooting skills across OS, network, and storage.
• Knowledge of the following vendor products is preferred but not required: NetApp 7-Mode/cDOT/Engenio, IBM Spectrum Scale (aka GPFS), HDS storage arrays/HCP, EMC Atmos, Brocade SAN/FOS, Veritas Storage Foundation Suite.
Job Title: Cybersecurity Agent Developer
Location: Bengaluru, India
Experience: 7+ Years
Employment Type: Full-time
About the Role:
We are seeking a highly skilled Cybersecurity Agent Developer with deep expertise in C/C++ and Golang or Rust to build and optimize high-performance security agents for Windows, Linux, and macOS platforms. This role requires a strong background in low-level system programming, performance tuning, and security-centric design to ensure effective monitoring, threat detection, and system protection across diverse environments.
Key Responsibilities:
- Design, develop, and maintain cross-platform endpoint security agents.
- Optimize agent performance to ensure minimal system overhead and real-time responsiveness.
- Implement system-level hooks and monitoring components including:
- Process monitoring
- File system and network activity tracking
- System telemetry collection
- Work with kernel-level APIs and frameworks, such as:
- ETW, WFP, WMI, MiniFilter (Windows)
- eBPF, auditd, fanotify, netfilter (Linux)
- EndpointSecurity framework, XPC, System Extensions (macOS)
- Build robust, secure inter-process communication (IPC) and data serialization mechanisms.
- Integrate agents with cloud-based security platforms via REST APIs, gRPC, and TLS.
- Collaborate with internal teams (threat intelligence, detection, response) to evolve agent capabilities.
- Perform in-depth debugging, profiling, and optimization using industry-standard tools.
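The IPC and data-serialization responsibility above commonly uses length-prefixed message framing between agent components. The sketch below shows that pattern in Python for clarity; a production agent in C/C++/Go/Rust would likely use protobuf/gRPC as the posting notes, and the event fields here are hypothetical.

```python
import json
import struct

def frame(event: dict) -> bytes:
    """Serialize an event dict as a 4-byte big-endian length prefix
    followed by compact JSON -- a common agent IPC framing."""
    payload = json.dumps(event, separators=(",", ":")).encode()
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> dict:
    """Inverse of frame(): read the length prefix, decode the payload."""
    (n,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + n].decode())

# Hypothetical process-start telemetry event
evt = {"type": "process_start", "pid": 4242, "image": "/usr/bin/curl"}
wire = frame(evt)
print(unframe(wire) == evt)  # True
```

Length-prefixing lets the receiver read exactly one message from a stream socket without a delimiter scan, which matters when events arrive at high rates from kernel hooks.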
Required Skills & Experience:
Core Programming:
- Strong proficiency in C/C++ and either Golang or Rust
- Solid experience in multi-threaded and asynchronous programming
Platform Expertise:
- Proven experience developing for Windows, Linux, and macOS
- Deep knowledge of system-level programming, including:
- Windows: WinAPI, ETW, WFP, WMI, MiniFilter
- Linux: eBPF, auditd, fanotify, netfilter
- macOS: EndpointSecurity framework, XPC, System Extensions
Security & Networking:
- Understanding of secure IPC, TLS, gRPC, and secure coding practices
- Familiarity with system hardening and secure memory management
Debugging & Optimization Tools:
- Proficient in using tools like GDB, LLDB, Valgrind, Perf, Wireshark, Sysinternals Suite
Version Control:
- Strong experience with Git (GitHub, GitLab)
Preferred Qualifications:
- Experience with cybersecurity frameworks like MITRE ATT&CK, Sysmon, YARA, Suricata
- Hands-on exposure to kernel/driver development
- Familiarity with EDR/XDR, sandboxing, and SIEM integrations
- Understanding of malware analysis and threat detection techniques
- Exposure to container security and cloud-native security agent development
What You’ll Do:
We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.
Responsibilities:
● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)
● Build CI/CD pipelines using Jenkins and integrate them with Git workflows
● Design and manage Kubernetes clusters and helm-based deployments
● Manage infrastructure as code using Terraform
● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)
● Ensure security best practices across cloud resources, networks, and secrets
● Automate repetitive operations and improve system reliability
● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
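A tiny slice of the CI/CD pipeline work above is routing deployments by Git branch. The sketch below shows the kind of logic a Jenkins or GitHub Actions pipeline encodes; the branch-naming conventions are assumptions, not this team's actual scheme.

```python
def target_environment(branch: str) -> str:
    """Map a Git branch name to a deploy environment.
    Assumes a main/release-branch convention."""
    if branch == "main":
        return "production"
    if branch.startswith("release/"):
        return "staging"
    return "development"

print(target_environment("main"))           # production
print(target_environment("release/2.4"))    # staging
print(target_environment("feature/login"))  # development
```

In a pipeline this decision usually gates which Terraform workspace or GKE cluster the deploy step targets.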
What We’re Looking For:
Required Skills:
● 1–3 years of hands-on experience in a DevOps or SRE role
● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)
● Proficiency in Kubernetes (deployment, scaling, troubleshooting)
● Experience with Terraform for infrastructure provisioning
● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools
● Understanding of DevSecOps principles and cloud security practices
● Good command over Linux, shell scripting, and basic networking concepts
Nice to have:
● Experience with Docker, Helm, ArgoCD
● Exposure to other cloud platforms (AWS, Azure)
● Familiarity with incident response and disaster recovery planning
● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana

The company has a diverse portfolio of products.
Industry – Manufacturing
10+ years of overall experience in handling IT in manufacturing companies.
Qualification: Bachelor's in IT or a relevant field
Job Summary:
1. Exposure to managing IT functions covering infrastructure and applications.
2. SAP Business One administration: handling, managing, maintaining, and controlling the entire SAP platform, including enhancements.
3. Business continuity and disaster recovery.
4. Providing training and support to the internal team and end users.
Mandatory Skills:
• SAP B1 Administration, Configuration and Support experience.
• Experience on Linux OS and HANA Database administration.
• Experience in business process automation.
• Proven working experience in installing, configuring, upgrading, and troubleshooting SAP B1, Linux/Windows operating systems, and HANA database platforms.
• Knowledge of monitoring and maintaining performance according to requirements and upgrading systems with new release models as requirements change.
• Knowledge of interfacing SAP B1 with 3rd-party solutions.
• Knowledge of security, backup, redundancy, and disaster recovery strategies.
• Understanding of license compliance, governance, risk management, and statutory audit requirements.
• Knowledge of Information Security Management Systems (ISMS, ISO 27001:2022)
• Knowledge of cloud migrations, implementation, management, and maintenance
Additional Skills:
• Knowledge of Active Directory, DNS, Group Policy creation, and DHCP server design and planning.
• Knowledge of IT system security, antivirus (mainly Trend Micro), and firewall devices.
• Self-motivated: driven to achieve results and flexible to work anytime, including weekends, as may be needed
• Excellent verbal and written communication skills.
• Strong analytical and problem-solving skills.
• Strong organizational skills and adaptive capacity for rapidly changing priorities and workloads
• Exposure to AWS, Azure, and GCP.
Responsibilities :
1) Provide first-level support to all SAP B1 users for problem solving and inquiries. Coordinate communication and the resolution of complex issues.
2) Manage day-to-day SAP B1 operations and the server infrastructure supporting a multi-site end-user environment.
3) Manage projects, including the planning and execution of enhancements.
4) Monitor and manage user access, permissions, and security protocols to maintain system integrity, and perform routine maintenance on servers and network infrastructure, including patch updates and performance tuning.
5) Implement and monitor backup, restore, and redundancy strategies to ensure data security and availability.
6) Troubleshoot and resolve hardware and software issues across various systems promptly to minimize downtime.
7) Configure and maintain virtualization platforms, ensuring high availability and optimized performance.
8) Conduct internal and statutory IT audits
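The backup and redundancy duties above rest on a retention policy; as a minimal sketch, the function below decides which daily backups fall outside the retention window. Real policies (and SAP B1/HANA backup tooling) usually add weekly and monthly tiers; the 7-day window here is an assumption.

```python
from datetime import date, timedelta

def backups_to_prune(dates, keep_daily=7):
    """Return backup dates older than the daily retention window,
    measured from the newest backup."""
    cutoff = max(dates) - timedelta(days=keep_daily - 1)
    return sorted(d for d in dates if d < cutoff)

# Ten hypothetical daily backups, Jan 1-10
dates = [date(2024, 1, d) for d in range(1, 11)]
print(backups_to_prune(dates))
# [datetime.date(2024, 1, 1), datetime.date(2024, 1, 2), datetime.date(2024, 1, 3)]
```

Anchoring the cutoff to the newest backup (rather than today) keeps the policy safe when the backup job itself has been failing for a few days.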