50+ Linux/Unix Jobs in India
Qualifications & Requirements:
- 4+ years of experience in C++ application development.
- Hands-on experience with C++11 or above.
- Strong knowledge of object-oriented programming and software design.
- Deep understanding of STL, multi-threading, socket programming, and data structures.
- Solid grasp of Linux development and debugging techniques.
- Proficient in using GCC, GDB, and Makefile.
- Familiarity with Valgrind and similar analysis tools.
- Experience with version control tools like Git.
- Experience writing and maintaining automated tests.
- Experience in capital markets/trading domain is a plus.
Skills:
- Strong problem-solving and analytical thinking.
- Clear and effective communication.
- Self-driven with the ability to work independently.
- Passionate about high-quality software and strong engineering practices.
- Comfortable working in a fast-paced, collaborative environment.
Education: B.Tech / M.Tech only
MANDATORY CRITERIA:
- This is a contractual role for an ongoing company project. The duration is one year and may be extended as the project requires; after that, the candidate will be moved to a permanent role.
- The candidate will be on the company's payroll only.
- The candidate should be comfortable working directly at the client's site.
- Immediate to 15-day joiners preferred
- 3 to 5 years of hands-on experience in Linux Device Driver development
- Strong experience with Linux kernel programming & memory management
- Experience with Zephyr OS / device driver model (porting bare-metal drivers).
- Familiarity with RTOS concepts, Linux kernel internals, and hardware protocols (they mainly use AXI, I2C, and SPI).
- Strong knowledge of PCIe and DMA drivers
- Proficiency in C / C++ programming languages
- Experience working with hardware interfaces/protocols (AXI, I2C, SPI)
REQUIRED SKILLS:
- Proven experience in developing Linux Device Drivers.
- Preferred: experience with Zephyr (porting bare-metal drivers to the Zephyr OS device driver model).
- Strong knowledge of PCIe and DMA drivers (familiarity with Xilinx IPs such as AXI-DMA and XDMA is a plus).
- Expertise in Linux Memory Management.
- Proficiency in C/C++ programming languages.
- Preferred: familiarity with real-time operating systems (RTOS), Linux kernel internals, and hardware protocols (they mainly use AXI, I2C, and SPI).
Immediate to 15-day joiners are preferable; we need to close this position as soon as possible.
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Job Details
- Designation: Principal Engineer – Solr
- Location: Bangalore / Mysore / Remote
- Job Type: Full-time
- Department: Software R&D
Job Summary
We are seeking a highly skilled and experienced Principal Engineer with a strong background in Apache Solr and Java to lead our Engineering and customer-led initiatives. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our search platform while providing expert-level troubleshooting and resolution for critical production issues.
This role will involve designing the architecture for new platforms while reviewing and recommending better approaches for existing ones to drive continuous improvement and efficiency.
Key Responsibilities
- Lead Engineering and support activities for Solr-based search applications, ensuring minimal downtime and optimal performance
- Design and develop the architecture of new platforms while reviewing and recommending better approaches for existing ones
- Continuously enhance search ranking, query understanding, and retrieval effectiveness
- Diagnose, troubleshoot, and resolve complex technical issues in Solr, Java-based applications, and supporting infrastructure
- Perform deep-dive analysis of logs, performance metrics, and alerts to proactively prevent incidents
- Optimize Solr indexes, queries, and configurations to enhance search performance and reliability (see the sketch after this list)
- Work closely with development, operations, and business teams to drive improvements in system stability and efficiency
- Implement monitoring tools, dashboards, and alerting mechanisms to enhance observability and proactive issue detection
- Apply AI-based search techniques using vector databases, RAG models, NLP, and LLMs
- Collaborate on capacity planning, system scaling, and disaster recovery strategies for mission-critical search systems
- Provide mentorship and technical guidance to junior engineers and support teams
- Drive innovation by tracking latest trends, emerging technologies, and best practices in AI-based Search, Solr, and other search platforms
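As a flavor of the day-to-day query-tuning work above, here is a minimal sketch of hitting Solr's HTTP select handler with timing diagnostics enabled, using Python's requests library; the core name and field list are hypothetical.

```python
import requests

SOLR = "http://localhost:8983/solr/products"  # hypothetical core name

def timed_query(q, rows=10):
    """Run a query with Solr's timing debug enabled to spot slow components."""
    params = {
        "q": q,
        "rows": rows,
        "fl": "id,name,score",  # fetch only needed fields to cut payload size
        "debug": "timing",      # adds per-component timing to the response
        "wt": "json",
    }
    resp = requests.get(f"{SOLR}/select", params=params, timeout=5)
    resp.raise_for_status()
    body = resp.json()
    qtime = body["responseHeader"]["QTime"]  # server-side query time in ms
    hits = body["response"]["numFound"]
    return qtime, hits, body.get("debug", {}).get("timing")

qtime, hits, timing = timed_query("name:search")
print(f"QTime={qtime}ms hits={hits}")
```

The per-component timing breakdown is what guides changes to schema, filter caches, and query structure.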
Requirement
- 8+ years of experience in software development and production support with a focus on Apache Solr, Java, and databases (Oracle, MySQL, PostgreSQL, etc.)
- Strong understanding of Solr indexing, query execution, schema design, configuration, and tuning
- Experience in designing and implementing scalable system architectures for search platforms
- Proven ability to review and assess existing platform architectures, identifying areas for improvement and recommending better approaches
- Proficiency in Java, Spring Boot, and microservices architectures
- Experience with Linux / Unix-based environments, shell scripting, and debugging production systems
- Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Splunk, ELK Stack) and log analysis
- Expertise in troubleshooting performance issues related to Solr, JVM tuning, and memory management
- Familiarity with cloud platforms such as AWS, Azure, or GCP and containerization technologies like Docker / Kubernetes
- Strong analytical and problem-solving skills, with the ability to work under pressure in a fast-paced environment
- Certifications in Solr, Java, or cloud technologies
- Excellent communication and leadership abilities
About Our Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Paid time off
- Training and development opportunities
Experience - 10-20 Yrs
Job Location - CommerZone, Yerwada, Pune
Work Mode - Work from Office
Shifts - General Shift
Work days - 5 days
Qualification - Full-time graduation mandatory
Domain - Payment/Card/Banking/BFSI/ Retail Payments
Job Type - Full Time
Notice period - Immediate or 30 days
Interview Process -
1) Screening
2) Virtual L1 interview
3) Managerial Round Face to Face at Pune Office
4) HR Discussion
Job Description
Job Summary:
The Production/L2 Application Support Manager will be responsible for managing the banking applications that support our payment gateway systems in a production environment. You will oversee the deployment, monitoring, optimization, and maintenance of all application components. You will ensure that our systems run smoothly, meet business and regulatory requirements, and provide high availability for our customers.
Key Responsibilities:
- Manage and optimize the application for the payment gateway systems to ensure high availability, reliability, and scalability.
- Oversee the day-to-day operations of production environments, including managing cloud services (AWS), load balancing, database systems, and monitoring tools.
- Lead a team of application support engineers and administrators, providing technical guidance and support to ensure applications and infrastructure solutions are implemented efficiently and effectively.
- Collaborate with development, security, and product teams to ensure applications support the needs of the business and comply with relevant regulations.
- Monitor application performance and system health using monitoring tools and ensure quick resolution of any performance bottlenecks or system failures.
- Develop and maintain capacity planning, monitoring, and backup strategies to ensure scalability and minimal downtime during peak transaction periods.
- Drive continuous improvement of processes and tools for efficient production/application management.
- Ensure robust security practices are in place across production systems, including compliance with industry standards
- Conduct incident response, root cause analysis, and post-mortem analysis to prevent recurring issues and improve system performance.
- Oversee regular patching, updates, and version control of production systems to minimize vulnerabilities.
- Develop and maintain application support documentation, including architecture diagrams, processes, and disaster recovery plans.
- Manage and execute on-call duties, ensuring timely resolution of application-related issues and ensuring proper support coverage.
Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- 8+ years of experience managing L2 application support in high-availability, mission-critical environments, ideally within a payment gateway or fintech organization.
- Experience working in L2 production support for Java-based applications.
- Experience with database systems (SQL, NoSQL) and database management, including high availability and disaster recovery strategies.
- Excellent communication and leadership skills, with the ability to collaborate effectively across teams and drive initiatives forward.
- Ability to work well under pressure and in high-stakes situations, ensuring uptime and service continuity.
About Sun King
Sun King is the world’s leading off-grid solar energy company, delivering energy access to the 1.8 billion people who lack reliable grid connections through innovative product design, fintech solutions, and field operations.
Key highlights:
- Connected over 20 million homes to solar power across Africa and Asia, adding 200,000 homes monthly.
- Affordable ‘pay-as-you-go’ financing model; after 1-2 years, customers own their solar equipment.
- Saved customers over $4 billion to date.
- Collect 650,000 daily payments via 28,000 field agents using mobile money systems.
- Products range from home lighting to high-energy appliances, with expansion into clean cooking, electric mobility, and entertainment.
With 2,800 staff across 12 countries, our team includes experts in various fields, all passionate about serving off-grid communities.
Diversity Commitment:
44% of our workforce are women, reflecting our commitment to gender diversity.
About the role:
Sun King is looking for a self-driven Infrastructure engineer, who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IAC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments.
What you would be expected to do:
- Work with engineering, automation, and data teams on various infrastructure requirements.
- Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform.
- Manage AWS services for multiple teams.
- Manage custom data store deployments such as sharded MongoDB clusters, Elasticsearch clusters, and upcoming services.
- Deploy and manage Kubernetes resources.
- Deploy and manage custom metrics exporters, trace data, and custom application metrics, and design dashboards that query metrics from multiple resources, forming an end-to-end observability stack (see the sketch after this list).
- Set up incident response services and design effective processes.
- Deploy and manage critical platform services such as OPA and Keycloak for IAM.
- Advocate best practices for high availability and scalability when designing AWS infrastructure and observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines.
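On the custom-exporter point above, here is a minimal sketch of a Prometheus exporter using the prometheus_client library; the metric name and the probed value are hypothetical stand-ins for a real datastore or queue statistic.

```python
import random  # stand-in for a real metric source
import time

from prometheus_client import Gauge, start_http_server  # pip install prometheus-client

# Hypothetical business metric; a real exporter would poll MongoDB,
# Elasticsearch, or a message queue for this value.
QUEUE_DEPTH = Gauge("ingest_queue_depth", "Pending documents awaiting ingestion")

def collect():
    QUEUE_DEPTH.set(random.randint(0, 100))  # replace with a real probe

if __name__ == "__main__":
    start_http_server(9105)  # Prometheus scrapes http://<host>:9105/metrics
    while True:
        collect()
        time.sleep(15)  # roughly align with the Prometheus scrape interval
```

A Grafana dashboard would then chart ingest_queue_depth alongside the exporter's default process metrics.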
You might be a strong candidate if you have/are:
- Hands-on experience with Docker (or another container runtime) and Linux, with the ability to perform basic administrative tasks.
- Experience working with web servers (nginx, apache) and cloud providers (preferably AWS).
- Hands-on scripting and automation experience (Python, Bash), experience debugging and troubleshooting Linux environments and cloud-native deployments.
- Experience building CI/CD pipelines, with familiarity with monitoring & alerting systems (Grafana, Prometheus, and exporters).
- Knowledge of web architecture, distributed systems, and single points of failure.
- Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks.
- Good networking fundamentals — SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls.
Good to have:
- Experience with backend development and setting up databases and performance tuning using parameter groups.
- Working experience in Kubernetes cluster administration and Kubernetes deployments.
- Experience working alongside SecOps engineers.
- Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing.
- Setup and use of OpenTelemetry, central logging, and monitoring systems.
What Sun King offers:
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry.
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.
Responsibilities
- Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models (see the sketch after this list)
- Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
- Automate the training, testing and deployment processes for machine learning models
- Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability
- Implement best practices for version control, model reproducibility and governance
- Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
- Troubleshoot and resolve issues related to model deployment and performance
- Ensure compliance with security and data privacy standards in all MLOps activities
- Keep up to date with the latest MLOps tools, technologies and trends
- Provide support and guidance to other team members on MLOps practices
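To make the pipeline idea concrete, here is a minimal sketch of a CI gate that trains, evaluates, and conditionally promotes a model, assuming scikit-learn; the dataset, file names, and accuracy threshold are illustrative.

```python
import json

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_GATE = 0.95  # hypothetical promotion threshold

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))

# Persist the candidate and its metrics so the pipeline run is reproducible.
joblib.dump(model, "candidate_model.joblib")
with open("metrics.json", "w") as f:
    json.dump({"accuracy": acc}, f)

# Fail this pipeline stage if the candidate regresses below the gate.
if acc < ACCURACY_GATE:
    raise SystemExit(f"accuracy {acc:.3f} below gate {ACCURACY_GATE}; not promoting")
print(f"accuracy {acc:.3f}: promote candidate_model.joblib")
```

A Jenkins stage would run this script and treat the non-zero exit as a failed build.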
Required Skills And Experience
- 3-10 years of experience in MLOps, DevOps or a related field
- Bachelor's degree in Computer Science, Data Science or a related field
- Strong understanding of machine learning principles and model lifecycle management
- Experience in Jenkins pipeline development
- Experience in automation scripting
We are looking for a skilled and motivated Integration Engineer to join our dynamic team in the payment domain. This role involves the seamless integration of payment systems, APIs, and third-party services into our platform, ensuring smooth and secure payment processing. The ideal candidate will bring experience with payment technologies, integration methodologies, and a strong grasp of industry standards.
Key Responsibilities:
- System Integration:
- Design, develop, and maintain integrations between various payment processors, gateways, and internal platforms using RESTful APIs, SOAP, and related technologies (see the sketch after this list).
- Payment Gateway Integration:
- Integrate third-party payment solutions such as Visa, MasterCard, PayPal, Stripe, and others into the platform.
- Troubleshooting & Support:
- Identify and resolve integration issues including transactional failures, connectivity issues, and third-party service disruptions.
- Testing & Validation:
- Conduct end-to-end integration testing to ensure payment system functionality across development, staging, and production environments.
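As an illustration of the RESTful integration work, here is a minimal sketch of creating a payment against a hypothetical PSP endpoint with an idempotency key for safe retries; the URL, header names, and payload fields are assumptions, not any specific gateway's API.

```python
import uuid

import requests

GATEWAY = "https://api.example-psp.com/v1"  # hypothetical PSP endpoint
API_KEY = "sk_test_..."                     # supplied via env/secret store in practice

def create_payment(amount_minor, currency, order_id):
    """Create a payment, retrying safely via an idempotency key."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Idempotency-Key": str(uuid.uuid4()),  # many PSPs dedupe retries on this header
    }
    payload = {"amount": amount_minor, "currency": currency, "reference": order_id}
    resp = requests.post(f"{GATEWAY}/payments", json=payload, headers=headers, timeout=10)
    if resp.status_code >= 500:
        # Transient PSP failure: surface for retry rather than marking the order failed.
        raise RuntimeError(f"gateway error {resp.status_code}: {resp.text}")
    resp.raise_for_status()
    return resp.json()
```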
Qualifications:
- Education:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field. Equivalent work experience is also acceptable.
- Experience:
- 3+ years of hands-on experience in integrating payment systems and third-party services.
- Proven experience with payment gateways (e.g., Stripe, Square, PayPal, Adyen) and protocols (e.g., ISO 20022, EMV).
- Familiarity with payment processing systems and industry standards.
Desirable Skills:
- Strong understanding of API security, OAuth, and tokenization practices.
- Experience with PCI-DSS compliance.
- Excellent problem-solving and debugging skills.
- Effective communication and cross-functional collaboration capabilities.
We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines and microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it open to discussion which tools will solve the problem in the most robust and quickest way.
- That being said, our bright team of engineers has already assembled a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the tech stack we already use.
Job Title : Java Backend Developer
Experience : 3 – 8 Years
Location : Pune (Onsite; Pune candidates only)
Notice Period : Immediate to 15 days (or serving notice period with last working day approaching)
About the Role :
We are seeking an experienced Java Backend Developer with strong hands-on skills in backend microservices development, API design, cloud platforms, observability, and CI/CD.
The ideal candidate will contribute to building scalable, secure, and reliable applications while working closely with cross-functional teams.
Mandatory Skills : Java 8 / Java 17, Spring Boot 3.x, REST APIs, Hibernate / JPA, MySQL, MongoDB, Prometheus / Grafana / Spring Actuators, AWS, Docker, Jenkins / GitHub Actions, GitHub, Windows 7 / Linux.
Key Responsibilities :
- Design, develop, and maintain backend microservices and REST APIs
- Implement data persistence using relational and NoSQL databases
- Ensure performance, scalability, and security of backend systems
- Integrate observability and monitoring tools for production environments
- Work within CI/CD pipelines and containerized deployments
- Collaborate with DevOps, QA, and product teams for feature delivery
- Troubleshoot, optimize, and improve existing modules and services
Mandatory Skills :
- Languages & Frameworks : Java 8, Java 17, Spring Boot 3.x, REST APIs, Hibernate, JPA
- Databases : MySQL, MongoDB
- Observability : Prometheus, Grafana, Spring Actuators
- Cloud Technologies : AWS
- Containerization Tools : Docker
- CI/CD Tools : Jenkins, GitHub Actions
- Version Control : GitHub
- Operating Systems : Windows 7, Linux
Nice to Have :
- Strong analytical and debugging abilities
- Experience working in Agile/Scrum environments
- Good communication and collaborative skills
We are seeking an experienced and highly skilled Java (Fullstack) Engineer to join our team.
The ideal candidate will have a strong background in back-end Java, Spring Boot, and the Spring Framework, as well as front-end JavaScript with React or Angular, with the ability to build scalable, high-performance applications.
Responsibilities
- Develop, test, and deploy scalable and robust back-end services using Java and Spring Boot
- Build responsive and user-friendly front-end applications using modern JavaScript frameworks (React or Angular)
- Collaborate with architects and team members to design scalable, maintainable, and efficient systems.
- Contribute to architectural decisions for microservices, APIs, and cloud solutions.
- Implement and maintain RESTful APIs for seamless integration.
- Write clean, efficient, and reusable code adhering to best practices
- Conduct code reviews, performance optimizations, and debugging
- Work with cross-functional teams, including UX/UI designers, product managers, and QA
- Mentor junior developers and provide technical guidance.
Skills & Requirements
- Minimum 5 Years of experience in backend/ fullstack development
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, Microservices, REST APIs, Kafka
- Front-end - JavaScript, HTML, CSS, TypeScript, Angular
- Database - MySQL
Preferred
- Experience with batch writing, application performance, caching, security, and web security
- Experience working in fintech, payments, or high-scale production environments
About Hudson Data
At Hudson Data, we view AI as both an art and a science. Our cross-functional teams — spanning business leaders, data scientists, and engineers — blend AI/ML and Big Data technologies to solve real-world business challenges. We harness predictive analytics to uncover new revenue opportunities, optimize operational efficiency, and enable data-driven transformation for our clients.
Beyond traditional AI/ML consulting, we actively collaborate with academic and industry partners to stay at the forefront of innovation. Alongside delivering projects for Fortune 500 clients, we also develop proprietary AI/ML products addressing diverse industry challenges.
Headquartered in New Delhi, India, with an office in New York, USA, Hudson Data operates globally, driving excellence in data science, analytics, and artificial intelligence.
⸻
About the Role
We are seeking a Data Analyst & Modeling Specialist with a passion for leveraging AI, machine learning, and cloud analytics to improve business processes, enhance decision-making, and drive innovation. You’ll play a key role in transforming raw data into insights, building predictive models, and delivering data-driven strategies that have real business impact.
⸻
Key Responsibilities
1. Data Collection & Management
• Gather and integrate data from multiple sources including databases, APIs, spreadsheets, and cloud warehouses.
• Design and maintain ETL pipelines ensuring data accuracy, scalability, and availability.
• Utilize any major cloud platform (Google Cloud, AWS, or Azure) for data storage, processing, and analytics workflows.
• Collaborate with engineering teams to define data governance, lineage, and security standards.
2. Data Cleaning & Preprocessing
• Clean, transform, and organize large datasets using Python (pandas, NumPy) and SQL (see the sketch after this section).
• Handle missing data, duplicates, and outliers while ensuring consistency and quality.
• Automate data preparation using Linux scripting, Airflow, or cloud-native schedulers.
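A minimal pandas sketch of the cleaning steps above, assuming an illustrative transactions extract; the column names and the imputation/outlier choices are assumptions.

```python
import pandas as pd

df = pd.read_csv("transactions_raw.csv")  # hypothetical extract

df = df.drop_duplicates(subset=["txn_id"])                   # drop duplicate records
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # coerce bad values to NaN
df["amount"] = df["amount"].fillna(df["amount"].median())    # impute missing values

# Flag outliers beyond 3 standard deviations instead of silently dropping them.
z = (df["amount"] - df["amount"].mean()) / df["amount"].std()
df["amount_outlier"] = z.abs() > 3

df.to_parquet("transactions_clean.parquet", index=False)
```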
3. Data Analysis & Insights
• Perform exploratory data analysis (EDA) to identify key trends, correlations, and drivers.
• Apply statistical techniques such as regression, time-series analysis, and hypothesis testing.
• Use Excel (including pivot tables) and BI tools (Tableau, Power BI, Looker, or Google Data Studio) to develop insightful reports and dashboards.
• Present findings and recommendations to cross-functional stakeholders in a clear and actionable manner.
4. Predictive Modeling & Machine Learning
• Build and optimize predictive and classification models using scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, and H2O.ai (see the sketch after this section).
• Perform feature engineering, model tuning, and cross-validation for performance optimization.
• Deploy and manage ML models using Vertex AI (GCP), AWS SageMaker, or Azure ML Studio.
• Continuously monitor, evaluate, and retrain models to ensure business relevance.
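As a taste of the tuning and cross-validation work, here is a minimal scikit-learn sketch; the synthetic dataset and the small parameter grid are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Small illustrative grid; a real search would cover more hyperparameters.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
    scoring="roc_auc",
    cv=5,  # 5-fold cross-validation guards against overfitting to one split
)
search.fit(X, y)
print("best params:", search.best_params_)
print(f"cv roc_auc: {search.best_score_:.3f}")
```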
5. Reporting & Visualization
• Develop interactive dashboards and automated reports for performance tracking.
• Use pivot tables, KPIs, and data visualizations to simplify complex analytical findings.
• Communicate insights effectively through clear data storytelling.
6. Collaboration & Communication
• Partner with business, engineering, and product teams to define analytical goals and success metrics.
• Translate complex data and model results into actionable insights for decision-makers.
• Advocate for data-driven culture and support data literacy across teams.
7. Continuous Improvement & Innovation
• Stay current with emerging trends in AI, ML, data visualization, and cloud technologies.
• Identify opportunities for process optimization, automation, and innovation.
• Contribute to internal R&D and AI product development initiatives.
⸻
Required Skills & Qualifications
Technical Skills
• Programming: Proficient in Python (pandas, NumPy, scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, H2O.ai).
• Databases & Querying: Advanced SQL skills; experience with BigQuery, Redshift, or Azure Synapse is a plus.
• Cloud Expertise: Hands-on experience with one or more major platforms — Google Cloud, AWS, or Azure.
• Visualization & Reporting: Skilled in Tableau, Power BI, Looker, or Excel (pivot tables, data modeling).
• Data Engineering: Familiarity with ETL tools (Airflow, dbt, or similar).
• Operating Systems: Strong proficiency with Linux/Unix for scripting and automation.
Soft Skills
• Strong analytical, problem-solving, and critical-thinking abilities.
• Excellent communication and presentation skills, including data storytelling.
• Curiosity and creativity in exploring and interpreting data.
• Collaborative mindset, capable of working in cross-functional and fast-paced environments.
⸻
Education & Certifications
• Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
• Master’s degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
• Relevant certifications are highly valued:
• Google Cloud Professional Data Engineer
• AWS Certified Data Analytics – Specialty
• Microsoft Certified: Azure Data Scientist Associate
• TensorFlow Developer Certificate
⸻
Why Join Hudson Data
At Hudson Data, you’ll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools — from AI and ML frameworks to cloud-based analytics platforms — to solve meaningful problems. You’ll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in computer science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools.
Why Join Us?
• Be part of a team supporting mission-critical systems in real-time.
• Work in a high-energy, tech-driven environment.
• Opportunities to grow into domain/tech leadership roles.
• Competitive salary and benefits, health coverage, and employee wellness programs.
JOB DETAILS:
- Job Title: Senior DevOps Engineer 1
- Industry: Ride-hailing
- Experience: 4-6 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)
Criteria:
1. Candidate must be from a product-based or scalable app-based startup with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are an added advantage.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience on Cloud platform - AWS
9. Candidate should have a minimum of 1.5 years' stability per organization and a clear reason for relocation.
Description
Job Summary:
As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.
Job Responsibilities:
- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.
- Understand the needs of stakeholders and convey them to developers.
- Work on ways to automate and improve development and release processes.
- Identify technical problems and develop software updates and fixes.
- Work with software developers to ensure that development follows established processes and works as intended.
- Do what it takes to keep the uptime above 99.99%.
- Understand DevOps philosophy and evangelize the principles across the organization.
- Strong communication and collaboration skills to break down the silos
Job Requirements:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.
- Strong background in operating systems like Linux.
- Understanding of the container orchestration tool Kubernetes.
- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus and Grafana are an added advantage.
- Problem-solving attitude and the ability to write scripts in any scripting language.
- Understanding of programming languages like Go/Python and Java.
- Basic understanding of databases and middleware like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
- Should be able to take ownership of tasks and be responsible.
- Good communication skills
Senior C++/Qt Backend Engineer (High-Performance Systems)
Location: Noida (On-site)
Introduction: Who We Are
We are a lean, product-based startup building the next generation of industrial robotics. Our products are deployed in critical, high-stakes environments, including Railways, Oil & Gas, Chemicals & Fertilizers, and Offshore operations.
We are not just writing code; we are building rugged, intelligent machines that operate in the real world.
1. The Mission (Pure Backend Focus)
You will architect the high-performance C++ Backend of our robotics software.
- No UI Work: You will NOT be designing UI pixels or writing QML front-end code.
- The Engine: Your mission is to build the “invisible engine” that processes 50 Mbps of raw scientific data and feeds it efficiently to the UI layer.
- Ownership: You own the threads, the data structures, and the logic.
2. Critical Outcomes (The First 4 Months)
- Architect the Data Ingestion Layer:
- Design a C++ backend capable of ingesting 50 Mbps of live sensor data (from embedded hardware) without dropping packets or consuming excessive CPU.
- Decouple Backend from UI:
- Implement Ring Buffers and Lock-Free Queues to separate high-speed data acquisition threads from the main Qt Event Loop, ensuring the backend never freezes the UI (illustrated conceptually after this list).
- Crash-Proof Concurrency:
- Refactor the threading model to eliminate Race Conditions and Deadlocks using proper synchronization (Mutexes/Semaphores) or lock-free designs.
- Efficient IPC Implementation:
- Establish robust Inter-Process Communication (Shared Memory / Sockets) to allow the C++ backend to exchange data with other Linux processes instantly.
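A conceptual sketch of that acquisition/consumer decoupling, written in Python only for brevity: a bounded ring buffer absorbs bursts so a slow consumer never stalls ingestion. The production version would be a lock-free C++ ring buffer feeding the Qt event loop; the buffer size, batch size, and rates here are illustrative.

```python
import threading
import time
from collections import deque

BUF = deque(maxlen=4096)  # ring buffer: oldest samples are overwritten, appends never block
lock = threading.Lock()   # the C++ version would be lock-free; this sketch needs a lock

def acquire():
    seq = 0
    while True:
        with lock:
            BUF.append(("sample", seq))  # stand-in for a high-rate sensor packet
        seq += 1
        time.sleep(0.0001)               # fast producer

def consume():
    while True:
        with lock:
            batch = [BUF.popleft() for _ in range(min(len(BUF), 256))]
        if batch:
            pass  # hand the batch to the processing/UI layer here
        time.sleep(0.01)                 # slower consumer; the buffer absorbs the burst

threading.Thread(target=acquire, daemon=True).start()
threading.Thread(target=consume, daemon=True).start()
time.sleep(1)
```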
3. Strategic Outcomes (Months 5 Onward)
As the product matures, your focus will shift from “Building the Engine” to “Hardening and Scaling the Ecosystem.”
- Robust OTA & Redundancy:
- Implement Linux A/B Partitioning strategies. You will design the fallback mechanism where the system uses atomic updates to revert to the last known good configuration in case of an update failure, ensuring high availability in remote offshore locations.
- Containerized Deployment:
- Move from manual builds to automated deployment. You will containerize the application (Docker / Podman) and integrate it with Jenkins / GitLab CI to enable seamless remote deployment to the robot fleet.
- Remote Diagnostics Engine:
- Build the internal logic to capture, compress, and transmit critical system logs and core dumps securely to the cloud without saturating the robot’s bandwidth.
- Fleet Monitoring Infrastructure:
- Distinct from simple logging, you will architect the heartbeat and telemetry protocols that allow our central command to monitor the health of robots deployed in railways and chemical plants in real time.
4. Competencies (Must-Haves)
- Qt Core (Backend Only):
- Expert in QObject, QThread, QEventLoop, and Signal/Slot mechanisms. You understand how to push data to QML, but you don’t style it.
- High-Performance C++:
- You handle data at the byte level, preferring Circular Buffers (Ring Buffers) over standard vectors for streams.
- Concurrency Mastery:
- You know when to use Lock-Free programming to avoid thread contention and can manage interactions between Data Acquisition and Processing threads without bottlenecks.
- Design Patterns:
- Competence in Producer-Consumer (for streams), Singleton (hardware managers), and Factory patterns.
- Linux System Programming:
- Comfortable with IPC (Shared Memory, Unix Domain Sockets) and optimizing process priorities.
5. The “Squad” (Your Team)
- Embedded Engineers:
- They push the raw 50 Mbps stream to the OS; you write the drivers to catch it.
- UI / Frontend Developers:
- They handle QML / UX; you provide the data APIs they need.
- Robotics (ROS) Engineers:
- You ensure their heavy algorithms don’t starve your data acquisition threads.
- Testers:
- You ensure your code stands up to their stress testing.
6. Why This Role Defines Your Career
- Deep Backend Engineering:
- Escape the “button styling” trap. This is 100% logic, memory management, and architecture.
- Real Engineering Problems:
- Solve race conditions, memory leaks, and high-velocity data streams.
- Architectural Autonomy:
- You decide how the data moves and choose the patterns. You own the “Engine Room.”
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks (see the sketch after this list)
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
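One hypothetical example of such an operational script, assuming the boto3 SDK with credentials in the environment; the region, topic ARN, and alerting policy are illustrative.

```python
import boto3  # pip install boto3; assumes AWS credentials in the environment

ec2 = boto3.client("ec2", region_name="ap-south-1")
sns = boto3.client("sns", region_name="ap-south-1")
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:ops-alerts"  # hypothetical topic

# Find instances failing their status checks and alert the on-call channel.
statuses = ec2.describe_instance_status(IncludeAllInstances=True)["InstanceStatuses"]
failed = [
    s["InstanceId"]
    for s in statuses
    if s["InstanceStatus"]["Status"] == "impaired"
]
if failed:
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="EC2 status-check failures",
        Message=f"Impaired instances: {', '.join(failed)}",
    )
```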
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
Job Summary
We are looking for a Marketing Data Engineering Specialist who can manage our real-estate lead delivery pipelines, integrate APIs, automate data workflows, and support performance marketing with accurate insights. The ideal candidate understands marketing funnels and has strong skills in API integrations, data analysis, automation, and server deployments.
Key Responsibilities
- Manage inbound/outbound lead flows through APIs, webhooks, and sheet-based integrations (see the sketch after this list).
- Clean, validate, and automate datasets using Python, Excel, and ETL workflows.
- Analyse lead feedback (RNR, NT, QL, SV, Booking) and generate actionable insights.
- Build and maintain automated reporting dashboards.
- Deploy Python scripts/notebooks on Linux servers and monitor cron jobs/logs.
- Work closely with marketing, client servicing, and data teams to improve lead quality and campaign performance.
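For the inbound webhook flows, here is a minimal Flask sketch of a lead-intake endpoint with basic validation; the route, field names, and downstream hand-off are assumptions.

```python
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)
REQUIRED = {"name", "phone", "source"}  # illustrative lead schema

@app.post("/webhooks/leads")
def inbound_lead():
    lead = request.get_json(silent=True) or {}
    missing = REQUIRED - lead.keys()
    if missing:
        return jsonify(error=f"missing fields: {sorted(missing)}"), 400
    # Normalize the phone number to digits before it reaches reporting.
    lead["phone"] = "".join(ch for ch in lead["phone"] if ch.isdigit())
    # In practice: push the lead to a CRM or queue here instead of just acknowledging.
    return jsonify(status="accepted"), 202

if __name__ == "__main__":
    app.run(port=8080)
```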
Required Skills
- Python (Pandas, API requests), Advanced Excel, SQL
- REST APIs, JSON, authentication handling
- Linux server deployment (cron, logs)
- Data visualization tools (Excel, Google Looker Studio preferred)
- Strong understanding of performance marketing metrics and funnels
Qualifications
- Bachelor’s degree in Engineering/CS/Maths/Statistics/Marketing Analytics or a related field.
- Minimum 3 years of experience in marketing analytics, data engineering, or marketing operations.
Preferred Traits
- Detail-oriented, analytical, strong problem-solver
- Ability to work in fast-paced environments
- Good communication and documentation skills
Description
We are seeking a skilled and detail-oriented Member of Technical Staff focusing on Network Infrastructure, Linux Administration and Automation. The role involves managing and maintaining Linux-based systems and infrastructure, automating repetitive tasks, and ensuring smooth operation.
Requirements
- In-depth experience with Linux systems (configuration, troubleshooting, networking, and administration)
- Network infrastructure management knowledge. CCNA/CCNP or an equivalent certification is a plus
- Scripting skills in at least one language (e.g., Bash, Python, Go).
- Knowledge of version control systems like Git and experience with branching, merging, and tagging workflows
- Experience with virtualization technologies such as Proxmox or VMWare, including the design, implementation, and management of virtualized infrastructures. Understanding of virtual machine provisioning, resource management, and performance optimization in virtual environments.
- Experience with containerization technologies like Docker
- Familiarity with monitoring and logging tools.
- Experience with endpoint security.
Responsibilities
- Network Infrastructure Management: Configure, manage, and troubleshoot routers, switches, firewalls, and wireless networks; maintain and optimize network performance to ensure reliability and security.
- Linux Administration: Manage and maintain Linux-based systems, ensuring high availability and performance.
- Infrastructure Management: Managing servers, networks, storage, and other infrastructure components, capacity planning, and disaster recovery.
- Automation: Scripting (Bash, Python, Golang, etc.), configuration management (Ansible, Puppet, Chef).
- Virtualization: Design, implement, and manage virtualized environments, ensuring optimal performance and resource efficiency.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre-Series B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone by using cutting-edge software development techniques. The internet and, more specifically, broadband networks are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a DevOps philosophy, and warehouse-scale tools to drive innovation.
And although RtBrick is a young, innovative company, it stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), regional ISPs, and city ISPs, with expanding operations across Europe, North America, and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us, so embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide.
Hiring DevOps Engineers (Freelance)
We’re hiring for our client: Biz-Tech Analytics
Role: DevOps Engineer (Freelance)
Experience: 4-7+ years
Project: Terminus Project
Location: Remote
Engagement Type: Freelance | Project-based
About the Role:
Biz-Tech Analytics is looking for experienced DevOps Engineers to contribute to the Terminus Project, a hands-on initiative involving system-level problem solving, automation, and containerised environments.
This role is ideal for engineers who enjoy working close to the system layer, debugging complex issues, and building reliable automation in isolated environments.
Key Responsibilities:
• Work on Linux-based systems, handling process management, file systems, and system utilities
• Write clean, testable Python code for automation and verification (see the sketch after this list)
• Build, configure, and manage Docker-based environments for testing and deployment
• Troubleshoot and debug complex system and software issues
• Collaborate using Git and GitHub workflows, including pull requests and branching
• Execute tasks independently and iterate based on structured feedback
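A minimal sketch of that build-run-verify loop, assuming the Docker SDK for Python (pip install docker); the image tag and the in-container test command are hypothetical.

```python
import docker  # pip install docker

client = docker.from_env()

# Build a test image from the current directory's Dockerfile.
image, _logs = client.images.build(path=".", tag="terminus-test:latest")

# Run the checks inside the container; a non-zero exit raises ContainerError,
# which fails this verification script.
output = client.containers.run(
    image.id,
    command="python -m pytest -q",  # hypothetical in-container test command
    remove=True,                    # clean the container up afterwards
)
print(output.decode())
print("container checks passed")
```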
Required Skills & Qualifications:
• Expert-level proficiency with Linux CLI, including Bash scripting
• Strong Python programming skills for automation and tooling
• Hands-on experience with Docker and containerized environments
• Excellent problem-solving and debugging skills
• Proficiency with Git and standard GitHub workflows
Preferred Qualifications:
• Professional experience in DevOps or Site Reliability Engineering (SRE)
• Exposure to cloud platforms such as AWS, GCP, or Azure
• Familiarity with machine learning frameworks like TensorFlow or PyTorch
• Prior experience contributing to open-source projects
Engagement Details
• Fully remote freelance engagement
• Flexible workload, with scope to take on additional tasks
• Opportunity to work on real-world systems supporting advanced AI and infrastructure projects
Apply via Google form: https://forms.gle/SDgdn7meiicTNhvB8
About Biz-Tech Analytics:
Biz-Tech Analytics partners with global enterprises, AI labs, and industrial businesses to help them build and scale frontier AI systems. From data creation to deployment, the team delivers specialised services including human-in-the-loop annotation, reinforcement learning from human feedback (RLHF), and custom dataset creation.
With a network of 500+ vetted developers, STEM professionals, linguists, and domain experts, Biz-Tech Analytics supports leading global platforms by enhancing complex AI models and providing high-precision feedback at scale.
Their work sits at the intersection of advanced research, engineering rigor, and real-world AI deployment, making them a strong partner for cutting-edge AI initiatives.
As an Engineering Manager, you'll lead efforts to strengthen and optimize our state-of-the-art systems, ensuring high performance, scalability, and efficiency across our suite of trading solutions.
The core responsibilities for the job include the following:
Technical Expertise:
- C++ coding and debugging to strengthen and optimize systems.
- Design and architecture (HLD/LLD) to ensure scalable and robust solutions.
- Implementing and enhancing DevOps, Agile, and CI/CD pipelines to improve development workflows.
- Managing escalations and ensuring high-quality customer outcomes.
Architecture and Design:
- Define and refine the architectural vision and technical roadmap for enterprise software solutions.
- Design scalable, maintainable, and secure systems in line with business goals.
- Collaborate with stakeholders to translate requirements into technical solutions.
- Driving engineering initiatives to foster innovation, efficiency, and excellence.
Project Management:
- Oversee project timelines, deliverables, and quality assurance processes.
- Coordinate cross-functional teams to ensure seamless integration of systems.
- Identify risks and proactively implement mitigation strategies.
Technical Leadership:
- Lead and mentor a team of engineers, fostering a collaborative and high-performance culture.
- Provide technical direction and guidance on complex software engineering challenges.
- Drive code quality, best practices, and standards across the engineering team.
Requirements:
- 10-15 years in the tech industry, with 2-4 years in technical leadership or managerial roles.
- Technical Expertise: Deep expertise in C++ development, enterprise architecture, and scalable system design, with proficiency in performance optimization, software architecture, and networking principles.
- Extensive experience managing the full development lifecycle of large-scale software products, from concept to deployment.
- Strong knowledge of STL containers, multi-threading concepts, and algorithms.
- Solid understanding of memory management and efficient resource utilization.
- Microservices Architecture Expertise: Experience in designing and implementing scalable, reliable microservices.
- Strong Communication and Decision-making skills: Ability to clearly articulate trade-offs, make informed decisions, and ensure alignment across stakeholders.
- Commitment to Engineering Excellence: Deep understanding of best practices, including code quality, testability, security, and release management, with a passion for fostering a strong engineering culture and continuously improving developer workflows and tools.
- Self-Driven and Motivated: Ability to operate independently while driving impactful results.
Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time | Navi Mumbai, Maharashtra, India | 5+ Years Experience | ₹12,00,000 – 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary : INR 12,00,000 - 14,00,000
Job Summary
We are hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, and automation skills, plus hands-on experience in Cybersecurity & VAPT, to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python (a brief sketch follows this sub-list)
- Implement IaC (Terraform/CloudFormation)
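For illustration, one routine Bash/Python automation task in a DevSecOps context is certificate-expiry monitoring. A minimal Python sketch, with placeholder hostnames, might look like this:

```python
# Sketch: flag TLS certificates nearing expiry (hosts listed are placeholders).
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    # cert_time_to_seconds converts the cert's timestamp string to epoch seconds.
    return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

for host in ("example.com",):
    left = days_until_expiry(host)
    print(f"{host}: {left:.0f} days left" + ("  <-- RENEW SOON" if left < 30 else ""))
```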
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience + understanding of OWASP, cloud security
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
Backend Developer (Django)
About the Role:
We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.
Key Responsibilities:
- Develop and maintain Python-based web applications using Django and Django Rest Framework.
- Build and integrate RESTful APIs.
- Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
- Contribute to improving development workflows through automation.
- Assist in deploying applications using cloud platforms like Heroku or AWS.
- Write clean, maintainable, and efficient code.
Requirements:
Backend:
- Strong understanding of Django and Django Rest Framework (DRF).
- Experience with task queues like Celery (a minimal sketch follows).
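As a hedged illustration of the Celery experience mentioned above, a minimal task definition might look like this; the Redis broker URL and task body are placeholders, not the team's actual setup:

```python
# tasks.py — a minimal Celery sketch; broker URL and task logic are illustrative.
from celery import Celery

app = Celery("backend", broker="redis://localhost:6379/0")

@app.task
def generate_report(order_id: int) -> str:
    # Real logic (DB queries, PDF rendering, email) would go here.
    return f"report-ready-{order_id}"

# Enqueued from Django view code with: generate_report.delay(42)
```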
Frontend (Basic Understanding):
- Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.
Hosting & Deployment:
- Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.
Linux/Server Knowledge:
- Basic to intermediate understanding of Linux commands and server environments.
- Ability to work with terminal, virtual environments, SSH, and basic server configurations.
Python Knowledge:
- Good grasp of OOP concepts.
- Familiarity with Pandas for data manipulation is a plus.
Soft & Team Skills:
- Strong collaboration and team management abilities.
- Ability to work in a team-driven environment and coordinate tasks smoothly.
- Problem-solving mindset and attention to detail.
- Good communication skills and eagerness to learn.
What We Offer:
- A collaborative, friendly, and growth-focused work environment.
- Opportunity to work on real-time projects using modern technologies.
- Guidance and mentorship to help you advance in your career.
- Flexible and supportive work culture.
- Opportunities for continuous learning and skill development.
Location : Bhayander (Onsite)
Immediate to 30-day joiner and Mumbai-based candidate preferred.
Dear Candidate
Candidate must have:
- Minimum 3-5 years of experience working as a NOC Engineer / Senior NOC Engineer in the telecom/product industry (preferably telecom monitoring).
- BE in CS, EE, or Telecommunications from a recognized university.
- Knowledge of NOC Process
- Technology exposure to telecom – 5G, 4G, IMS – with a solid understanding of telecom performance KPIs and/or the Radio Access Network; knowledge of call flows will be an advantage.
- Experience with Linux OS and SQL – mandatory.
- Residence in Delhi – mandatory.
- Ready to work in a 24×7 environment.
- Ability to monitor alarms based on our environment.
- Capability to identify and resolve issues occurring in the RADCOM environment.
- Any relevant technical certification will be an added advantage.
Responsibilities:
- Based in RADCOM India offices, Delhi.
- Responsible for all NOC monitoring and technical support (T1/T2) aspects required by the process for RADCOM's solutions.
- Ready to participate in customer-planned activities, execution, and monitoring.
Artificial Intelligence Research Intern
We are looking for a passionate and skilled AI Intern to join our dynamic team for a 6-month full-time internship. This is an excellent opportunity to work on cutting-edge technologies in Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing (NLP), contributing to real-world projects that create a tangible impact.
Key Responsibilities:
• Research, design, develop, and implement AI and Deep Learning algorithms.
• Work on NLP systems and models for tasks such as text classification, sentiment analysis, and data extraction.
• Evaluate and optimize machine learning and deep learning models.
• Collect, process, and analyze large-scale datasets.
• Use advanced techniques for text representation and classification.
• Write clean, efficient, and testable code for production-ready applications.
• Perform web scraping and data extraction using Python (requests, BeautifulSoup, Selenium, APIs, etc.); a brief sketch follows this list.
• Collaborate with cross-functional teams and clearly communicate technical concepts to both technical and non-technical audiences.
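To illustrate the scraping stack named above, a minimal sketch using requests and BeautifulSoup might look like this; the URL and the choice of h2 headings are placeholders:

```python
# A minimal scraping sketch, assuming the target page is static HTML.
import requests
from bs4 import BeautifulSoup

def fetch_headings(url: str) -> list[str]:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Collect the text of every h2 on the page, stripped of whitespace.
    return [h.get_text(strip=True) for h in soup.find_all("h2")]

if __name__ == "__main__":
    print(fetch_headings("https://example.com"))
```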
Required Skills and Experience:
• Theoretical and practical knowledge of AI, ML, and DL concepts.
• Good understanding of Python and libraries such as TensorFlow, PyTorch, Keras, Scikit-learn, NumPy, Pandas, SciPy, and Matplotlib, plus NLP tools like NLTK and spaCy.
• Strong understanding of Neural Network Architectures (CNNs, RNNs, LSTMs).
• Familiarity with data structures, data modeling, and software architecture.
• Understanding of text representation techniques (n-grams, BoW, TF-IDF, etc.); a short sketch follows this list.
• Comfortable working in Linux/UNIX environments.
• Basic knowledge of HTML, JavaScript, HTTP, and Networking.
• Strong communication skills and a collaborative mindset.
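As a small illustration of the text-representation techniques listed above, a TF-IDF sketch with scikit-learn might look like this; the two-document corpus is invented:

```python
# TF-IDF sketch with scikit-learn over a tiny in-memory corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the market opened higher", "the market closed lower"]
vec = TfidfVectorizer(ngram_range=(1, 2))  # unigrams + bigrams
X = vec.fit_transform(corpus)              # sparse (docs x features) matrix
print(X.shape, vec.get_feature_names_out()[:5])
```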
Job Type: Full-Time Internship
Location: In-Office (Bhayander)
About Phi Commerce
Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform which processes digital payments at doorstep, online, and in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments and going beyond routine payment processing.
Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.
The core team comprises industry veterans with complementary skill sets and nearly 100 years of combined global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software and Electra Card Services.
Awards & Recognitions:
The company's innovative work has been recognized at prestigious forums in the short span of its existence:
- Certification of Recognition as StartUp by Department of Industrial Policy and Promotion.
- Winner of the "Best Payment Gateway" of the year award at Payments & Cards Awards 2018
- Winner at Payments & Cards Awards 2017 in 3 categories: Best Startup of the Year, Best Online Payment Solution of the Year (Consumer), and Best Online Payment Solution of the Year (Merchant).
- Winner of NPCI IDEATHON on Blockchain in Payments
- Shortlisted by Govt. of Maharashtra as top 100 start-ups pan-India across 8 sectors
About the role:
As an SDET, you will work closely with the development, product, and QA teams to ensure the delivery of high-quality, reliable, and scalable software. You will be responsible for creating and maintaining automated test suites, designing testing frameworks, and identifying and resolving software defects. The role will also involve continuous improvement of the test process and promoting best practices in software development and testing.
Key Responsibilities:
- Develop, implement, and maintain automated test scripts for validating software functionality and performance (see the sketch after this list).
- Design and develop testing frameworks and tools to improve the efficiency and effectiveness of automated testing.
- Collaborate with developers, product managers, and QA engineers to identify test requirements and create effective test plans.
- Write and execute unit, integration, regression, and performance tests to ensure high-quality code.
- Troubleshoot and debug issues identified during testing, working with developers to resolve them in a timely manner.
- Conduct code reviews to ensure code quality, maintainability, and testability.
- Work with CI/CD pipelines to integrate automated testing into the development process.
- Continuously evaluate and improve testing strategies, identifying areas for automation and optimization.
- Monitor the quality of releases by tracking test coverage, defect trends, and other quality metrics.
- Ensure that all tests are documented, maintainable, and reusable for future software releases.
- Stay up-to-date with the latest trends, tools, and technologies in the testing and automation space.
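By way of illustration, a minimal automated UI check might look like the following. Python bindings are used here for brevity, although the listing itself emphasizes Java, and the target URL is a placeholder:

```python
# Selenium sketch (Python bindings; assumes a local Chrome installation).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium 4.6+ resolves the driver automatically
try:
    driver.get("https://example.com")
    assert "Example" in driver.title          # a trivial smoke assertion
    link = driver.find_element(By.TAG_NAME, "a")
    print(link.get_attribute("href"))
finally:
    driver.quit()
```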
Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience as an SDET, software engineer, or quality engineer with a focus on test automation.
- Strong experience in automated testing frameworks and tools (e.g., Selenium, Appium, JUnit, TestNG, Cucumber).
- Proficiency in programming languages, primarily Java.
- Experience in designing and implementing test automation for web applications, APIs, and mobile applications.
- Strong understanding of software testing methodologies and processes (e.g., Agile, Scrum).
- Excellent problem-solving skills and attention to detail.
- Good communication and collaboration skills, with the ability to work effectively in a team.
- Knowledge of performance testing and load testing tools (e.g., JMeter, LoadRunner) is a plus.
- Experience with test management tools (e.g., TestRail, Jira).
- Knowledge of databases and ability to write SQL queries to validate test data.
- Experience in API testing and knowledge of RESTful web services.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including in recent roles
- (Company) Product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you okay for F2F round?
- Has the candidate filled the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
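To make the Airflow responsibility concrete, a minimal DAG sketch might look like this; it assumes Airflow 2.4+, and the DAG ID, schedule, and task bodies are placeholders:

```python
# A minimal Airflow DAG sketch: a daily two-step retraining pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull training data")      # stand-in for the real Spark/EMR step

def retrain_model():
    print("fit and register model")  # stand-in for the real training step

with DAG(dag_id="ml_retrain", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_features)
    train = PythonOperator(task_id="train", python_callable=retrain_model)
    extract >> train  # train runs only after extract succeeds
```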
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master's degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Dear Candidate
Looking for Telecom Advance Support Engineer (PSO)
If this opportunity interests you, kindly get in touch.
Job Description:
3+ years' experience working as a Support Engineer / Network Engineer / Deployment Engineer / Solutions Engineer / Integration Engineer in the telecom deployment industry.
■ BE/B.Sc. in CS, EE, or Telecommunications with honors from an elite university (70%–90% depending on the college; Delhi University a plus).
■ Telecom Knowledge (IMS, 4G) - Mandatory.
■ Linux OS, SQL knowledge – Mandatory. Vertica DB, scripting - a plus.
■ OpenStack / cloud-based infrastructure (OpenStack/Kubernetes) knowledge is a plus
■ Be available to work off business hours to address critical matters/situations based on Radcom’s on-call support model
■ Willing to work evening and night shifts and weekends.
■ Fluent English – Mandatory
■ Valid passport
■ Based in RADCOM India offices, Delhi, India.
■ Responsible for all the technical support aspects required by the client for RADCOM’s solutions, including integration, deployment, and customization of applications; and KPI reports for individual customer needs.
■ Support and deployment of Radcom’s solution at cloud-based customer environments (case to case basis)
■ Having very good customer interaction interface and able to drive customer calls independently.
■ Able to drive and deliver small internal customer projects end to end, including all required internal communications.
■ Working closely with management on customer updates and future plans
■ Daily maintenance and problem resolution, system patches and software upgrades, and routine system configuration
■ Identifies, diagnoses, and resolves system issues with good troubleshooting and root-cause analysis.
■ If required: travel for on-site support outside India, training, installation, configuration, etc.
Thanks & Regards
Shreya Tiwari
Technical Recruiter - HR
RADCOM
Review Criteria
- Strong IT Engineer Profile
- 4+ years of hands-on experience in Azure/Office 365 compliance and management, including policy enforcement, audit readiness, DLP, security configurations, and overall governance.
- Must have strong experience handling user onboarding/offboarding, identity & access provisioning, MFA, SSO configurations, and lifecycle management across Windows/Mac/Linux environments.
- Must have proven expertise in IT Inventory Management, including asset tracking, device lifecycle, CMDB updates, and hardware/software allocation with complete documentation.
- Hands-on experience configuring and managing FortiGate Firewalls, including routing, VPN setups, policies, NAT, and overall network security.
- Must have practical experience with FortiGate WiFi, AP configurations, SSID management, troubleshooting connectivity issues, and securing wireless environments.
- Must have strong knowledge and hands-on experience with Antivirus Endpoint Central (or equivalent) for patching, endpoint protection, compliance, and threat remediation.
- Must have solid understanding of Networking, including routing, switching, subnetting, DHCP, DNS, VPN, LAN/WAN troubleshooting.
- Must have strong troubleshooting experience across Windows, Linux, and macOS environments for system issues, updates, performance, and configurations.
- Must have expertise in Cisco/Polycom A/V solutions, including setup, configuration, video conferencing troubleshooting, and meeting room infrastructure support.
- Must have hands-on experience in Shell Scripting / Bash / PowerShell for automation of routine IT tasks, monitoring, and system efficiencies.
Job Specific Criteria:
- CV Attachment is mandatory
- Q1. Please share details of experience in troubleshooting (Rate out of 10, 10 being highly experienced) A. Windows Troubleshooting B. Linux Troubleshooting C. Macbook Troubleshooting
- Q2. Please share details of experience in below process (Rate out of 10, 10 being highly experienced) A. User Onboarding/Offboarding B. Inventory Management
- Q3. Please share details of experience in below tools and administrations (Rate out of 10, 10 being highly experienced) A. FortiGate Firewall B. FortiGate WiFi C. Antivirus Endpoint Central D. Networking E. Cisco/Polycom A/V solutions F. Shell Scripting/Bash/PowerShell G. Azure/Office 365 compliance and management
- Q4. Are you okay for F2F round (Noida)?
- Q5. What's your current company?
- Q6. Are you okay for rotational shift (10am - 7pm and 2pm to 11pm)?
Role & Responsibilities:
We are seeking an experienced IT Infrastructure/System Administrator to manage, secure, and optimize our IT environment. The ideal candidate will have expertise in enterprise-grade tools, strong troubleshooting skills, and hands-on experience configuring secure integrations, managing endpoint deployments, and ensuring compliance across platforms.
- Administer and manage Office 365 suite (Outlook, SharePoint, OneDrive, Teams etc) and related services/configurations.
- Handle user onboarding and offboarding, ensuring secure and efficient account provisioning and deprovisioning.
- Oversee IT compliance frameworks, audit processes, IT asset inventory management, and attendance systems.
- Administer Jira, FortiGate firewalls and Wi-Fi, antivirus solutions, and endpoint management systems.
- Provide network administration: routing, subnetting, VPNs, and firewall configurations.
- Support, patch, update, and troubleshoot Windows, Linux, and macOS environments, including applying vulnerability fixes and ensuring system security.
- Manage Assets Explorer for device and asset management/inventory.
- Set up, manage, and troubleshoot Cisco and Polycom audio/video conferencing systems.
- Provide remote support for end-users, ensuring quick resolution of technical issues.
- Monitor IT systems and network for performance, security, and reliability, ensuring high availability.
- Collaborate with internal teams and external vendors to resolve issues and optimize systems.
- Document configurations, processes, and troubleshooting procedures for compliance and knowledge sharing.
Ideal Candidate:
- Proven hands-on experience with:
- Office 365 administration and compliance.
- User onboarding/offboarding processes.
- Compliance, audit, and inventory management tools.
- Jira administration, FortiGate firewall, Wi-Fi, and antivirus solutions.
- Networking fundamentals: subnetting, routing, switching.
- Patch management, updates, and vulnerability remediation across Windows, Linux, and macOS.
- Assets Explorer/inventory management
- Strong troubleshooting, documentation, and communication skills.
Preferred Skills:
- Scripting knowledge in Bash or PowerShell for automation (see the sketch after this list).
- Experience working with Jira and Confluence.
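As an illustration of the kind of routine-task automation this role involves, here is a minimal sketch. Python is used for consistency with the other sketches in this article, though the listing names Bash/PowerShell, and the mount points and threshold are placeholders:

```python
# Disk-usage check sketch — alert when any mount crosses a threshold.
import shutil

def check_disk(paths=("/", "/home"), threshold=0.9):
    alerts = []
    for p in paths:
        usage = shutil.disk_usage(p)       # named tuple: total, used, free
        frac = usage.used / usage.total
        if frac >= threshold:
            alerts.append(f"{p}: {frac:.0%} full")
    return alerts

print(check_disk() or "all mounts healthy")
```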
Perks, Benefits and Work Culture:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and the ELK stack (a small exporter sketch follows this list)
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
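To illustrate the monitoring stack above, a tiny custom-metrics exporter using the Python prometheus_client library might look like this; the port, metric name, and simulated value are placeholders:

```python
# A tiny Prometheus exporter sketch; Prometheus would scrape :8000/metrics.
import random
import time

from prometheus_client import Gauge, start_http_server

ORDER_LATENCY = Gauge("order_latency_seconds", "Simulated order round-trip latency")

if __name__ == "__main__":
    start_http_server(8000)
    while True:
        # Stand-in for a real measurement from the trading system.
        ORDER_LATENCY.set(random.uniform(0.001, 0.02))
        time.sleep(5)
```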
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations, such as SEBI requirements
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
AI-based systems design and development: the entire pipeline from image/video ingest, metadata ingest, processing, and encoding to transmission.
Implementation and testing of advanced computer vision algorithms.
Dataset search, preparation, annotation, training, testing, and fine-tuning of vision CNN models. Multimodal AI, LLMs, hardware deployment, explainability.
Detailed analysis of results. Documentation, version control, client support, upgrades.
REVIEW CRITERIA:
MANDATORY:
- Strong Senior/Lead DevOps Engineer Profile
- Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an IC (individual contributor) role
PREFERRED:
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
- Candidates from NCR region only (No outstation candidates).
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, and secure access policies and AWS organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor / Master’s degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
About the Role
We are seeking an accomplished DevOps Lead with 12+ years of experience in cloud infrastructure, automation, Blockchain, and CI/CD processes. The DevOps Lead will play a pivotal role in architecting scalable cloud environments, driving automation, ensuring secure deployments, and enabling efficient software delivery pipelines. The role involves working with AWS, Huawei Cloud, Kubernetes, Terraform, blockchain-based infrastructure, and modern DevOps toolchains while providing leadership, technical guidance, and client-facing communication.
Key Responsibilities
Leadership & Team Management
● Lead, mentor, and grow a team of DevOps engineers, setting technical direction and ensuring adherence to best practices.
● Facilitate collaboration across engineering, QA, security, and blockchain development teams.
● Act as the primary technical liaison with clients, managing expectations, requirements, and solution delivery.
Infrastructure Automation & Management
● Architect, implement, and manage infrastructure as code (IaC) using Terraform across multi-cloud environments.
● Standardize environments across AWS, Digital Ocean, Huawei Cloud with a focus on scalability, reliability, and security.
● Manage provisioning, scaling, monitoring, and cost optimization of infrastructure resources.
CI/CD & Automation
● Build, maintain, and optimize CI/CD pipelines supporting multiple applications and microservices.
● Integrate automated testing, static code analysis, and security scans into the pipelines.
● Implement blue-green / canary deployments and ensure zero downtime release strategies.
● Promote DevSecOps by embedding security policies into every phase of the delivery pipeline.
Containerization & Orchestration
● Deploy, manage, and monitor applications on Kubernetes clusters (EKS, CCE, or equivalent).
● Utilize Helm charts, Kustomize, and operators for environment consistency.
● Optimize container performance and manage networking, storage, and secrets.
Monitoring, Logging & Incident Response
● Implement and manage monitoring and alerting solutions (Prometheus, Grafana, ELK, CloudWatch, Loki).
● Define SLOs, SLIs, and SLAs for production systems.
● Lead incident response, root cause analysis, and implement preventative measures.
Governance, Security & Compliance
● Implement best practices for secrets management, key rotation, and role-based access control.
● Integrate vulnerability scanning and security audits into pipelines.
Required Skills & Qualifications
● 12+ years of experience in DevOps, with at least 5+ years in a lead capacity.
● Proven expertise with Terraform and IaC across multiple environments.
● Strong hands-on experience with AWS and Huawei Cloud infrastructure services.
● Deep expertise in Kubernetes cluster administration, scaling, monitoring, and networking.
● Advanced experience designing CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, or similar.
● Solid background in automated deployments, configuration management, and version control (Git, Ansible, Puppet, or Chef).
● Strong scripting and automation skills (Python, Bash, Go, or similar).
● Proficiency with monitoring/observability tools (Prometheus, Grafana, ELK, CloudWatch, Datadog).
● Strong understanding of blockchain infrastructure, node operations, staking setups, and deployment automation.
● Knowledge of container security, network policies, and zero-trust principles.
● Excellent communication, client handling, and stakeholder management skills with proven ability to present complex DevOps concepts to non-technical audiences.
● Ability to design and maintain highly available, scalable, and fault-tolerant systems in production environments.
Department: S&C – Site Reliability Engineering (SRE)
Experience Required: 4–8 Years
Location: Bangalore / Pune /Mumbai
Employment Type: Full-time
- Provide Tier 2/3 technical product support to internal and external stakeholders.
- Develop automation tools and scripts to improve operational efficiency and support processes.
- Manage and maintain system and software configurations; troubleshoot environment/application-related issues.
- Optimize system performance through configuration tuning or development enhancements.
- Plan, document, and deploy applications in Unix/Linux, Azure, and GCP environments.
- Collaborate with Development, QA, and Infrastructure teams throughout the release and deployment lifecycle.
- Drive automation initiatives for release and deployment processes.
- Coordinate with infrastructure teams to manage hardware/software resources, maintenance, and scheduled downtimes across production and non-production environments.
- Participate in on-call rotations (minimum one week per month) to address critical incidents and off-hour maintenance tasks.
Key Competencies
- Strong analytical, troubleshooting, and critical thinking abilities.
- Excellent cross-functional collaboration skills.
- Strong focus on documentation, process improvement, and system reliability.
- Proactive, detail-oriented, and adaptable in a fast-paced work environment.
Junior DevOps Engineer
Experience: 2–3 years
About Us
We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.
Key Responsibilities
- Support deployment, monitoring, and maintenance of trading and fintech applications.
- Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
- Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
- Troubleshoot and resolve production issues in a fast-paced environment.
- Implement and maintain CI/CD pipelines for continuous integration and delivery.
- Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
- Assist in maintaining compliance with financial industry standards and security best practices.
Required Skills
- 2–3 years of hands-on experience in DevOps or related roles.
- Proficiency in Linux/Unix environments.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Working knowledge of scripting languages (Bash, Python).
- Experience with configuration management tools (Ansible, Puppet, Chef).
- Understanding of networking concepts and security practices.
- Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
- Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
Preferred Skills
- Experience in fintech, trading, or financial services.
- Knowledge of high-frequency trading systems or low-latency environments.
- Familiarity with financial data protocols and APIs.
- Understanding of regulatory requirements in financial technology.
What We Offer
- Opportunity to work on cutting-edge fintech/trading platforms.
- Collaborative and learning-focused environment.
- Competitive salary and benefits.
- Career growth in a rapidly expanding domain.
Review Criteria
- Strong DevOps /Cloud Engineer Profiles
- Must have 3+ years of experience as a DevOps / Cloud Engineer
- Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
- Must have strong hands-on experience in Linux administration and system management
- Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
- Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
- Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
- Must be proficient in scripting languages such as Python or Bash for automation
- Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
- Top tier Product-based company (B2B Enterprise SaaS preferred)
Preferred
- Experience in multi-tenant SaaS infrastructure scaling.
- Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.
Role & Responsibilities
We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.
Key Responsibilities:
- Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
- Build and optimize CI/CD pipelines to support rapid release cycles.
- Manage containerization & orchestration (Docker, Kubernetes).
- Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
- Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
- Drive cloud security automation (IAM, SSL, secrets management).
- Partner with engineering teams to embed DevOps into SDLC.
- Troubleshoot production issues and drive incident response.
- Support multi-tenant SaaS scaling strategies.
Ideal Candidate
- 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
- Strong expertise in AWS, Azure, or GCP.
- Strong expertise in LINUX Administration.
- Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
- Proficient in Terraform/Ansible/CloudFormation.
- Strong scripting skills (Python, Bash).
- Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
- Strong grasp of cloud security best practices.
🚀 Hiring: PL/SQL Developer
⭐ Experience: 5+ Years
📍 Location: Pune
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
What We’re Looking For:
☑️ Hands-on PL/SQL developer with strong database and scripting skills, ready to work onsite and collaborate with cross-functional financial domain teams.
Key Skills:
✅ Must Have: PL/SQL, SQL, Databases, Unix/Linux & Shell Scripting
✅ Nice to Have: DevOps tools (Jenkins, Artifactory, Docker, Kubernetes), AWS/Cloud, Basic Python, AML/Fraud/Financial domain, Actimize (AIS/RCM/UDM)
Job Title: Senior Devops Engineer (Full-time)
Location: Mumbai, Onsite
Experience Required: 5+ Years
Required Qualifications
● Experience:
○ 5+ years of hands-on experience as a DevOps Engineer or similar role, with proven expertise in building and customizing Helm charts from scratch (not just using pre-existing ones).
○ Demonstrated ability to design and whiteboard DevOps pipelines, including CI/CD workflows for microservices applications.
○ Experience packaging and deploying applications with stateful dependencies (e.g., databases, persistent storage) in varied environments: on-prem (air-gapped and non-air-gapped), single-tenant cloud, multi-tenant cloud, and developer trials.
○ Proficiency in managing deployments in Kubernetes clusters, including offline installations, upgrades via Helm, and adaptations for client restrictions (e.g., no additional tools or VMs); a brief sketch follows this list.
○ Track record of handling client interactions, such as asking probing questions about infrastructure (e.g., OS versions, storage solutions, network restrictions) and explaining technical concepts clearly.
● Technical Skills:
○ Strong knowledge of Helm syntax and functionalities (e.g., Go templating, hooks, subcharts, dependency management).
○ Expertise in containerization with Docker, including image management (save/load, registries like Harbor or ECR).
○ Familiarity with CI/CD tools such as Jenkins, ArgoCD, GitHub Actions, and GitOps for automated and manual deployments.
○ Understanding of storage solutions for on-prem and cloud, including object/file storage (e.g., MinIO, Ceph, NFS, cloud-native like S3/EBS).
○ In-depth knowledge of Kubernetes concepts: StatefulSets, PersistentVolumes, namespaces, HPA, liveness/readiness probes, network policies, and RBAC.
○ Solid grasp of cloud networking: VPCs (definition, boundaries, virtualization via SDN, differences from private clouds), bare metal vs. virtual machines (advantages like resource efficiency, flexibility, and scalability).
○ Ability to work in air-gapped environments, preparing offline artifacts and ensuring self-contained deployments.
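As a rough sketch of what driving an offline Helm upgrade from automation code could look like, consider the following; the release name, chart path, namespace, and values file are all placeholders, and a real air-gapped install would also pre-load container images:

```python
# Sketch: driving an offline Helm install/upgrade from Python via subprocess.
import subprocess

def helm_install(release: str, chart_tgz: str, namespace: str, values: str) -> None:
    # 'upgrade --install' makes the operation idempotent: install if absent,
    # upgrade in place otherwise. check=True raises on a non-zero exit code.
    subprocess.run(
        ["helm", "upgrade", "--install", release, chart_tgz,
         "--namespace", namespace, "--create-namespace", "-f", values],
        check=True,
    )

helm_install("analytics", "./charts/analytics-1.2.3.tgz", "prod", "values-prod.yaml")
```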
Role: Sr. Data Scientist
Exp: 4-8 Years
CTC: up to 25 LPA
Technical Skills:
● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT); a brief sketch follows this list.
● Exposure to natural language processing (NLP) techniques is a plus.
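As a brief illustration of CNN transfer learning, a PyTorch sketch might look like this; it assumes torchvision 0.13+, and the number of classes is a placeholder:

```python
# Transfer-learning sketch: freeze a pretrained backbone, swap in a new head.
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int = 5) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                                   # freeze backbone
    model.fc = nn.Linear(model.fc.in_features, num_classes)       # trainable head
    return model

print(build_classifier().fc)
```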
• Educational Qualifications:
- B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
- A master's degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred
Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop code that validates and implements models in Credit Risk/Banking (a brief sketch follows this list).
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (scikit-learn, SparkML, TensorFlow, PyTorch, etc.).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Attitude to learn and comprehend the periodic changes in regulatory requirements as per the FED.
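To make the PySpark experience concrete, a minimal validation-style sketch might look like this; the input path and column names are invented for illustration:

```python
# PySpark sketch: a simple validation pass over bureau-style data.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("risk-validation").getOrCreate()
df = spark.read.parquet("s3://bucket/bureau/")   # placeholder path

# Per-product record counts and observed default rates.
summary = (df.groupBy("product")
             .agg(F.count("*").alias("n"),
                  F.mean("default_flag").alias("default_rate")))
summary.show()
```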
Job description:
Roles & Responsibilities
At Dolat, code is our business, so naturally, the Core Engineering and Systems team is at the centre of what we do. Our community of developers has designed and continues to enhance one of the fastest trading platforms using the latest tools and technologies. As a Software Developer, you’ll draw upon your computer science, mathematical, and analytical abilities to develop complex and nimble code used to grow our business and increase the efficiency of the global financial markets. Your responsibilities may include any of the following, which will require you to exercise discretion and independent judgment:
Augmenting, improving, redesigning, and/or re-implementing Dolat's low-latency/high throughput production trading environment, which collects data from and disseminates orders to exchanges around the world
Optimizing this platform by using network and systems programming, as well as other advanced techniques
Developing systems that provide easy access to historical market data and trading simulations
Building risk-management and performance-tracking tools
Shaping the future of Dolat through regular interviewing and infrequent campus recruiting trips
Implementing domain-optimized data structures
Learn and internalize the theories behind the current trading system
Participate in the design, architecture and implementation of automated trading systems while taking ownership of system from design through implementation
Skills & Experience
A strong background in data structures, algorithms, and object-oriented programming in C++
Exchange Connectivity experience a plus
Familiarity with Linux environments; Windows a plus
High-level knowledge & competencies in one or more of the following areas: TCP stack optimization; multi-core, single-machine parallelism; low-level performance / cache optimization / profiling
Additional requirements include:
Experience in distributed and/or highly concurrent systems is a plus
Experience in low-latency systems and/or high transaction environments is a plus
A passion for new technologies and ideas
The ability to manage multiple tasks in a fast-paced environment
Experience in network topologies and protocols like TCP and UDP
Job Description for PostgreSQL Lead
Job Title: PostgreSQL Lead
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Our Customer Account Management team is vital in ensuring client satisfaction and loyalty.
Role Overview
As the PostgreSQL Lead, you will own the design, implementation, and operational excellence of PostgreSQL environments. You’ll lead technical decision-making, mentor the team, interface with customers, and drive key initiatives covering performance tuning, HA architectures, migrations, and cloud deployments.
Key Responsibilities
- Lead PostgreSQL production environments: architecture, stability, performance, and scalability
- Oversee complex troubleshooting, query optimization, and performance analysis
- Architect and maintain HA/DR systems (e.g., Streaming Replication, Patroni, repmgr)
- Define backup, recovery, replication, and failover protocols
- Guide DB migrations, patches, and upgrades across environments
- Collaborate with DevOps and cloud teams for infrastructure automation
- Use monitoring (pg_stat_statements, PMM, Nagios or any monitoring stack) to proactively resolve issues
- Provide technical mentorship—conduct peer reviews, upskill, and onboard junior DBAs
- Lead customer interactions: understand requirements, design solutions, and present proposals
- Drive process improvements and establish database best practices
Requirements
- Experience: 4-5 years in PostgreSQL administration, with at least 2+ years in a leadership role
- Performance Optimization: Expert in query tuning, indexing strategies, partitioning, and execution plan analysis.
- Extension Management: Proficient with critical PostgreSQL extensions including:
- pg_stat_statements – query performance tracking (see the sketch after this list)
- pg_partman – partition maintenance
- pg_repack – online table reorganization
- uuid-ossp – UUID generation
- pg_cron – native job scheduling
- auto_explain – capturing costly queries
- Backup & Recovery: Deep experience with pgBackRest, Barman, and implementing Point-in-Time Recovery (PITR).
- High Availability & Clustering: Proven expertise in configuring and managing HA environments using Patroni, repmgr, and streaming replication.
- Cloud Platforms: Strong operational knowledge of AWS RDS and Aurora PostgreSQL, including parameter tuning, snapshot management, and performance insights.
- Scripting & Automation: Skilled in Linux system administration, with advanced scripting capabilities in Bash and Python.
- Monitoring & Observability: Familiar with pg_stat_statements, PMM, Nagios, and building custom dashboards using Grafana and Prometheus.
- Leadership & Collaboration: Strong problem-solving skills, effective communication with stakeholders, and experience leading database reliability and automation initiatives.
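For illustration, pulling the heaviest queries from pg_stat_statements via Python might look like this; the connection parameters are placeholders, and the total_exec_time column assumes PostgreSQL 13+ (older versions use total_time):

```python
# Sketch: top queries by total execution time via psycopg2.
# Requires the pg_stat_statements extension to be installed and enabled.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=postgres host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT query, calls, total_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 5
    """)
    for query, calls, total in cur.fetchall():
        print(f"{total:10.1f} ms  {calls:6d} calls  {query[:60]}")
```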
Preferred Qualifications
- Bachelor’s/Master’s degree in CS, Engineering, or equivalent
- PostgreSQL certifications (e.g., EDB, AWS)
- Consulting/service delivery experience in managed services or support roles
- Experience in large-scale migrations and modernization projects
- Exposure to multi-cloud environments and DBaaS platforms
What We Offer:
- Competitive salary and benefits package.
- Opportunity to work with a dynamic and innovative team.
- Professional growth and development opportunities.
- Collaborative and inclusive work environment.
Job Details:
- Work time: General shift
- Working days: 5 Days
- Mode of Employment - Work From Home
- Experience - 4-5 years
Job Overview :
We are looking for an experienced PL/SQL Developer to join our Professional Services team. The role involves developing and configuring enterprise-grade solutions, supporting clients during testing, and collaborating with internal teams. Candidates with strong expertise in PL/SQL and Unix/Linux are preferred, along with exposure to cloud, DevOps, or financial domains.
Key Responsibilities
- Develop and configure software features as per design specifications and enterprise standards.
- Interact with clients to resolve technical queries and support User Acceptance Testing (UAT).
- Collaborate with internal R&D, Professional Services, and Customer Support teams.
- Occasionally work at client sites or across different time zones.
- Ensure secure, scalable, and high-quality code.
Must-Have Skills
- Strong hands-on experience in PL/SQL, SQL, and Databases (Oracle, MS-SQL, MySQL, Postgres, MongoDB).
- Proficiency in Unix/Linux commands and shell scripting.
Nice-to-Have Skills
- Basic understanding of DevOps tools (Jenkins, Artifactory, Docker, Kubernetes).
- Exposure to Cloud environments (AWS preferred).
- Awareness of Python programming.
- Experience in AML, Fraud, or Financial Markets domain.
- Knowledge of Actimize (AIS/RCM/UDM).
Education & Experience
- Bachelor’s degree in Computer Science, Engineering, or equivalent.
- 4–8 years of overall IT experience, with 4+ years in software development.
- Strong problem-solving, communication, and customer interaction skills.
- Ability to work independently in time-sensitive environments.
About the Role:
We’re looking for a Python Developer to build, optimize, and scale applications that power our trading systems. You’ll work on automation, server clusters, and high-performance infrastructure in a fast-paced, tech-driven environment.
What you’ll do:
- Build and test applications end-to-end.
- Automate workflows with scripts.
- Optimize system performance and reliability.
- Manage code versions and collaborate with peers.
- Work with clusters of 100+ servers.
What we’re looking for:
- Strong Python fundamentals (OOP, data structures, algorithms).
- Experience with Linux commands, Bash scripting.
- Basics of NumPy, Matplotlib, and PostgreSQL (a small sketch follows this list).
- Hands-on with automation and scripting tools.
- Problem solver with a focus on scalability & optimization.
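As an illustrative (not prescriptive) example of the NumPy/Matplotlib basics listed above, the sketch below summarizes synthetic latency samples and renders a histogram headlessly; the data and file name are made up for the example.

```python
# Minimal sketch: summarize simulated order-latency samples with NumPy
# and save a histogram with Matplotlib. All data here is synthetic.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering for servers without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)
latencies_us = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)

print(f"p50 = {np.percentile(latencies_us, 50):.1f} us, "
      f"p99 = {np.percentile(latencies_us, 99):.1f} us")

plt.hist(latencies_us, bins=100)
plt.xlabel("latency (us)")
plt.ylabel("count")
plt.title("Simulated order latency")
plt.savefig("latency_hist.png")
```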
Why Dolat?
- Work at the intersection of finance & tech.
- Opportunity to solve complex engineering problems.
- Learn and grow with a team of smart, collaborative engineers.
Key Skills & Experience:
🔹 SQL, Oracle PL/SQL, Unix / Perl / Python
🔹 Oracle RDBMS (latest); PostgreSQL a plus
🔹 DB Objects | Performance Tuning | Analytics
🔹 Unix/Linux, Agile (Scrum, JIRA, Confluence, Bitbucket)
🔹 DevOps (UC4, Maven, Jenkins, CI/CD)
🔹 ETL, Data Modeling, Informatica
🔹 Java, Angular/React, Spring Boot, XML, JSON, API Gateway
🔹 Oracle APEX an advantage
We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.
Key Responsibilities
- Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders
- DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization (a brief sketch of such tooling follows this list)
- Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation
- System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations
- Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems
- Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices
- Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
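For a concrete flavor of the Python-based operational tooling described above, here is a minimal sketch of a rolling deployment with a health gate. The host names, service name, and health endpoint are hypothetical placeholders, and key-based SSH access is assumed.

```python
# Minimal sketch: roll a deployment across a server cluster, checking a
# health endpoint before moving to the next host. Host names, the service
# name, and the health URL are hypothetical placeholders.
import subprocess
import time
import urllib.request

HOSTS = ["app01", "app02", "app03"]
HEALTH_URL = "http://{host}:8080/healthz"

def deploy(host: str) -> None:
    # Restart the service over SSH; assumes key-based auth is configured.
    subprocess.run(["ssh", host, "sudo", "systemctl", "restart", "trading-gw"],
                   check=True)

def healthy(host: str, retries: int = 10) -> bool:
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL.format(host=host),
                                        timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection refused or timed out; retry
        time.sleep(3)
    return False

for host in HOSTS:
    deploy(host)
    if not healthy(host):
        raise SystemExit(f"{host} failed health check; halting rollout")
    print(f"{host}: deployed and healthy")
```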
Qualifications and Experience
- Bachelor's degree in Computer Science, Engineering, or related technical field
- At least 5 years of experience as a Software Engineer, including at least 2 years as a DevOps Engineer or in a similar role, working with complex software projects and environments.
- Excellent knowledge of cloud technologies, containers, and orchestration.
- Proficiency in scripting and programming languages such as Python and Bash.
- Experience with Linux operating systems and command-line tools.
- Proficient in using Git for version control.
Good to Have
- Experience with Nagios or similar monitoring and alerting systems
- Backend and/or frontend development experience for operational tooling
- Previous experience working in a trading firm or financial services environment
- Knowledge of database management and SQL
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Experience with DevOps practices and CI/CD pipelines
- Understanding of network protocols and system administration
Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)
Job Opening: FreeSWITCH Developer
Location: Ecospace Business Park, Rajarhat, Newtown, Kolkata
Employment Type: Full-Time, Permanent
Shift: Day Shift
Experience: Minimum 4+ years in FreeSWITCH/VoIP Development
Responsibilities:
- Design, develop, deploy, troubleshoot, and maintain tools and services supporting our cloud telephony network.
- Customize FreeSWITCH for large-scale audio/video conferencing (1000–1500 concurrent calls).
- Apply expertise in SIP, RTP, RTCP, TURN, STUN, NAT, and TLS (see the short SIP sketch after this list).
- Work hands-on with RTP proxies and routed audio conferences.
- Apply a working understanding of the SDP offer/answer mechanism.
- Work with load testing tools for FreeSWITCH audio conferences.
- Deploy and manage multiple FreeSWITCH instances using load balancers.
- Debug issues using packet captures (Wireshark/ngrep).
- Collaborate with mobile/API teams for integration and support.
- Work with codecs (PCMU, PCMA, G.729, Opus) and open-source telephony technologies (FreeSWITCH, WebRTC).
- Familiarity with SIP servers (SER/OpenSER), proxy servers, SBCs, and SIPX is a plus.
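To ground the SIP expertise mentioned above, here is a minimal, stdlib-only sketch that sends a SIP OPTIONS "ping" over UDP and prints the first line of the response. The addresses and identity fields are placeholders; this is an illustration of the protocol basics, not production code.

```python
# Minimal sketch: send a SIP OPTIONS "ping" to a SIP server (e.g. a
# FreeSWITCH instance) over UDP and print the response status line.
# The target host/port and identity fields are placeholders.
import socket
import uuid

TARGET = ("192.0.2.10", 5060)  # documentation-range IP; replace as needed
LOCAL_IP = "192.0.2.1"

call_id = uuid.uuid4().hex
branch = "z9hG4bK" + uuid.uuid4().hex[:12]  # RFC 3261 magic-cookie prefix
msg = (
    f"OPTIONS sip:{TARGET[0]} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {LOCAL_IP}:5060;branch={branch}\r\n"
    f"From: <sip:ping@{LOCAL_IP}>;tag=1\r\n"
    f"To: <sip:{TARGET[0]}>\r\n"
    f"Call-ID: {call_id}\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Max-Forwards: 70\r\n"
    "Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(msg.encode(), TARGET)
try:
    data, _ = sock.recvfrom(4096)
    print(data.decode(errors="replace").splitlines()[0])  # e.g. "SIP/2.0 200 OK"
except socket.timeout:
    print("no response (check firewall/NAT or the SIP profile)")
finally:
    sock.close()
```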
Qualifications:
- Bachelor’s degree in Engineering (B.Tech/MCA).
- 4+ years of hands-on experience in FreeSWITCH development or related VoIP technologies.
- Strong knowledge of Linux environments and command-line tools.
- Basic to intermediate SQL knowledge.
- Proficiency in scripting (Bash, Python, Perl) for automation.
- Strong understanding of VoIP protocols (SIP, RTP), networking principles (TCP/IP, DNS, DHCP, routing protocols).
- Excellent troubleshooting skills with ability to resolve complex technical issues.
- Strong problem-solving, communication, and collaboration skills.
Industry Preference:
- Travel (Preferred)
- Any Industry (Required)