
Job Summary:
As a Firmware Developer, you will be responsible for designing, developing, and optimizing embedded firmware for Bluetooth Low Energy (BLE) medical devices. You will collaborate closely with mobile, cloud, and hardware teams to ensure seamless communication and system reliability.
Location: Hyderabad
Key Responsibilities
● Firmware Development – Architect, implement, and optimize robust embedded firmware for BLE-based medical devices.
● BLE Communication – Ensure reliable BLE communication by fine-tuning GATT profiles, GAP settings, and connection parameters (see the client-side sketch after this list).
● Memory & Performance Optimization – Manage static memory allocation, flash memory layout, and power efficiency in resource-constrained environments.
● Cross-Platform BLE Handling – Work with mobile teams to handle BLE behavior inconsistencies across iOS and Android.
● Debugging & Optimization – Utilize BLE sniffers, debugging tools, and real-time logging to troubleshoot firmware issues.
● Security & Compliance – Implement secure pairing, bonding, and OTA firmware updates while adhering to medical device standards.
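The firmware itself would be written in C on the BLE SoC, but as a purely illustrative, client-side view of the GATT concepts named in the BLE Communication bullet, here is a minimal Python sketch using the open-source bleak library. The device address and characteristic UUID are hypothetical placeholders, not part of this posting.

```python
# Hypothetical client-side sketch of the GATT interactions described above,
# using the open-source "bleak" library. The firmware side would be
# implemented in C on the BLE SoC; the device address and characteristic
# UUID below are placeholders, not an actual device profile.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder peripheral address
VITALS_CHAR_UUID = "0000beef-0000-1000-8000-00805f9b34fb"  # placeholder characteristic

def handle_vitals(_, data: bytearray):
    # Notifications arrive as raw bytes whose layout is defined by the firmware's GATT profile.
    print(f"notification: {data.hex()}")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        # One-shot read of the characteristic value.
        value = await client.read_gatt_char(VITALS_CHAR_UUID)
        print(f"read: {value.hex()}")
        # Subscribe to notifications, the usual path for streaming vitals data.
        await client.start_notify(VITALS_CHAR_UUID, handle_vitals)
        await asyncio.sleep(10.0)
        await client.stop_notify(VITALS_CHAR_UUID)

asyncio.run(main())
```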
Required Skills & Expertise
● 4+ years of strong experience in Firmware/Embedded Development.
● Strong knowledge of BLE stack APIs (GATT, GAP, L2CAP) and BLE protocol internals (advertising, connection events, link layer).
● Proficiency in C for embedded systems, with expertise in static memory management.
● Experience with wear leveling, sector erase schemes, and endurance techniques.
● Familiarity with BLE connectivity challenges on iOS & Android and ability to mitigate inconsistencies.
● Hands-on experience with debugging tools such as Wireshark, TI SmartRF Sniffer, or equivalent.
● Exposure to BLE-based cloud workflows and real-time data synchronization.
● RTOS knowledge: understanding of task scheduling, ISR management, and power-optimized firmware.
● Experience with TI CC2640R2F & TI-RTOS is a plus.
Nice to Have
● OTA Firmware Updates: Experience with secure BLE pairing, bonding, and firmware upgrade mechanisms.
● Embedded Diagnostic Tools: Ability to develop real-time diagnostics for memory usage, BLE packet flow, and connection stability trends.
Why Join Monitra Healthcare?
🔬 Impact-Driven Work: Build life-saving medical technologies that make a real difference.
🚀 Cutting-Edge Tech: Work with advanced BLE, IoT, and AI-powered healthcare solutions.
🤝 Collaborative Team: Engage with a multidisciplinary team of engineers, data scientists, and healthcare experts.
🔗 Join us in shaping the future of connected healthcare!

Similar jobs
We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.
Key Responsibilities
- Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders
- DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization (a small example follows this list)
- Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation
- System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations
- Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems
- Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices
- Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
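As a hedged illustration of the "Python-based tools for operational processes" bullet above, here is a minimal post-deployment health-check sketch that could gate a pipeline stage. The endpoint URLs are hypothetical and the retry parameters are arbitrary.

```python
# Minimal sketch of Python-based operational tooling: poll service health
# endpoints after a deployment and exit non-zero if any stay unhealthy,
# so the script can gate a pipeline stage. Endpoint URLs are placeholders.
import sys
import time
import urllib.request

ENDPOINTS = [
    "http://service-a.internal/healthz",  # placeholder URLs
    "http://service-b.internal/healthz",
]

def healthy(url: str, timeout: float = 3.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def wait_until_healthy(url: str, attempts: int = 3, delay: float = 2.0) -> bool:
    for _ in range(attempts):
        if healthy(url):
            return True
        time.sleep(delay)  # tolerate brief post-deploy warm-up
    return False

def main() -> int:
    failures = [url for url in ENDPOINTS if not wait_until_healthy(url)]
    if failures:
        print("unhealthy after deploy:", ", ".join(failures))
        return 1
    print("all endpoints healthy")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```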
Qualifications and Experience
- Bachelor's degree in Computer Science, Engineering, or related technical field
- At least 5 years of proven experience as a Software Engineer, including at least 2 years as a DevOps Engineer or in a similar role, working with complex software projects and environments
- Excellent knowledge of cloud technologies, containers, and orchestration
- Proficiency in scripting and programming languages such as Python and Bash.
- Experience with Linux operating systems and command-line tools.
- Proficient in using Git for version control.
Good to Have
- Experience with Nagios or similar monitoring and alerting systems
- Backend and/or frontend development experience for operational tooling
- Previous experience working in a trading firm or financial services environment
- Knowledge of database management and SQL
- Familiarity with cloud platforms (AWS, Azure, GCP)
- Experience with DevOps practices and CI/CD pipelines
- Understanding of network protocols and system administration
Why You’ll Love Working Here
We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.
Here’s what we offer:
💰 Competitive salary – Get paid what you’re worth.
🌴 Generous paid time off – Recharge and come back sharper.
🌍 Work with the best – Collaborate with top-tier global talent.
✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.
🎯 Performance rewards – Multiple bonuses for those who go above and beyond.
🏥 Health covered – Comprehensive insurance so you’re always protected.
⚡ Fun, not just work – On-site sports, games, and a lively workspace.
🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.
📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.
🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.
🚚 Relocation support – Smooth move? We’ve got your back.
🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.
We work hard, play hard, and grow together. Join us.
(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)
We are looking for two Senior DevOps Engineers to join our Mumbai-based infrastructure team for a critical on-premises deployment project. This role is focused on transforming manual, legacy deployment practices into structured, secure, and compliant processes within a Windows-first, latency-sensitive environment.
The successful candidate will drive the creation of SOPs, deployment pipelines (without containerization), and a staging environment to support a hybrid stack of ASP.NET MVC (.NET), MS SQL Server (replication mode), and Java microservices with MySQL. This position requires on-site presence in Mumbai due to regulatory and infrastructure constraints and will play a key role in ensuring compliance with SEBI, RBI, PFMI, and IOSCO standards.
The key responsibility is to lead deployment modernization efforts in a secure, on-premises environment based in Mumbai. The role involves working with legacy Windows infrastructure, ASP.NET MVC apps, MS SQL replication, and manual deployment processes. No containerization or CI/CD tools are in place, so we’re looking for someone who can establish automation and structure from the ground up.
Mandatory: On-site availability in Mumbai, strong experience with manual Windows-based deployments, regulatory compliance awareness (SEBI/RBI/PFMI).
Duration: 3-6 months | Immediate start
● Auditing, monitoring, and improving existing infrastructure components of a highly available and scaled product on cloud with Ubuntu servers
● Running daily maintenance tasks and improving them with automation where possible
● Deploying new components, servers, and other infrastructure when needed
● Coming up with innovative ways to automate tasks
● Working with telecom carriers to get rates and destinations and updating them regularly on the system
● Working with Docker containers, Tinc, iptables, HAProxy, etcd, MySQL, MongoDB, CouchDB, and Ansible
You would bring the following skills to our team:
● Expertise with Docker containers and their networking, Tinc, iptables, HAProxy, etcd, and Ansible
● Extensive experience with setup, maintenance, monitoring, backup, and replication of MySQL
● Expertise with Ubuntu servers, the OS, and server-level networking
● Good experience working with MongoDB and CouchDB
● Good with networking tools
● Open-source server monitoring solutions like Nagios, Zabbix, etc.
● Experience with highly scaled, distributed applications running on datacenter Ubuntu VPS instances
● Innovative, out-of-the-box thinker with multitasking skills who works efficiently in a small team
● Working knowledge of scripting languages like Bash, Node, or Python
● Experience with calling platforms like FreeSWITCH, OpenSIPS, or Kamailio, and basic knowledge of the SIP protocol, is an advantage
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads (see the boto3 sketch after this list)
- Will be the client's point of contact for high-priority technical issues and new requirements
- Should act as Tech Lead, guiding and mentoring the junior members of the team
- Work with client application developers to build, deploy, and run both monolithic and microservices-based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, deploy, and manage production workloads, including applications on EC2 instances, APIs on Lambda functions, and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
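As a hedged sketch of the IAM/VPC groundwork in the first bullet, here is what a minimal boto3 version might look like. The CIDR ranges, policy name, and bucket ARN are illustrative placeholders, not values from this engagement.

```python
# Sketch of basic VPC and IAM setup with boto3. All identifiers and CIDR
# ranges below are placeholders chosen for illustration only.
import json
import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Create a VPC and a private subnet for application workloads.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Scope workload access to a single S3 bucket with a customer-managed policy.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-workload-bucket/*",  # placeholder ARN
    }],
}
iam.create_policy(
    PolicyName="example-workload-s3-access",
    PolicyDocument=json.dumps(policy_document),
)
```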
Qualifications
- At least 5 years of IT experience implementing enterprise applications preferred
- Should be AWS Solution Architect Associate Certified
- Must have at least 3 years of experience working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route 53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience with on-premises to AWS cloud workload migration
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimization
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and guide the team wherever needed
- Experience creating backups and managing disaster recovery
- Experience with Infrastructure as Code automation using scripts and tools like CloudFormation and Terraform (see the sketch after this list)
- Any exposure to creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of containerization technologies like Docker, Kubernetes, etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
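A minimal sketch of driving Infrastructure as Code from a script, in the spirit of the CloudFormation/Terraform bullet above: deploy a stack from a template file and block until it completes. The stack name and template path are placeholders, and a real pipeline might equally use Terraform.

```python
# Sketch of scripted CloudFormation deployment with boto3. The stack name
# and template file are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation")

with open("network-stack.yaml") as f:  # hypothetical template
    template_body = f.read()

cfn.create_stack(
    StackName="example-network-stack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM resources
)

# Block until CloudFormation reports success or failure.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="example-network-stack")
print("stack created")
```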
We are looking for a DevOps Engineer (individual contributor) to maintain and build upon our next-generation infrastructure. We aim to keep our systems secure, reliable, and high-performing, constantly striving for best-in-class infrastructure and security by:
- Leveraging a variety of tools to ensure all configuration is codified (using tools like Terraform and Flux) and applied in a secure, repeatable way (via CI)
- Routinely identifying new technologies and processes that enable us to streamline our operations and improve overall security
- Holistically monitoring our overall DevOps setup and health to ensure our roadmap constantly delivers high-impact improvements
- Eliminating toil by automating as many operational aspects of our day-to-day work as possible using internally created, third party and/or open-source tools
- Maintaining a culture of empowerment and self-service by minimizing friction for developers to understand and use our infrastructure through a combination of innovative tools, excellent documentation and teamwork
Tech stack: Microservices primarily written in JavaScript, Kotlin, Scala, and Python. The majority of our infrastructure sits within EKS on AWS, using Istio. We use Terraform and Helm/Flux when working with AWS and EKS (k8s). Deployments are managed with a combination of Jenkins and Flux. We rely heavily on Kafka, Cassandra, Mongo and Postgres and are increasingly leveraging AWS-managed services (e.g. RDS, lambda).
Azure DevOps
On-premises to Azure migration
Docker, Kubernetes
Terraform, CI/CD pipelines
Experience – 9+ years
Location – BG, Hyderabad, Remote, Hybrid
Budget – up to 30 LPA
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.
DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background working with cloud technologies, have set up efficient deployment processes, and are motivated to work with diverse and talented teams, we’d like to meet you.
Ultimately, you will execute and automate operational processes fast, accurately, and securely.
Skills and Experience
- 4+ years of experience building infrastructure with cloud providers (AWS, Azure, GCP)
- Experience deploying containerized applications built on NodeJS/PHP/Python to a Kubernetes cluster
- Experience monitoring production workloads with relevant metrics and dashboards
- Experience writing automation scripts using Shell, Python, Terraform, etc. (see the sketch after this list)
- Experience following security practices while setting up the infrastructure
- Self-motivated, able, and willing to help where help is needed
- Able to build relationships, be culturally sensitive, have goal alignment, and have learning agility
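A hedged sketch of the kind of automation script referenced above: it uses the official kubernetes Python client to flag deployments whose ready replica count lags the desired count. A local kubeconfig is assumed and the namespace is a placeholder.

```python
# Small automation/monitoring sketch: list deployments in a namespace and
# flag any that are not fully ready. Requires the official "kubernetes"
# Python client and a reachable cluster; the namespace is a placeholder.
from kubernetes import client, config

NAMESPACE = "production"  # placeholder

def main():
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(NAMESPACE).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        status = "OK" if ready >= desired else "DEGRADED"
        print(f"{status:9} {dep.metadata.name}: {ready}/{desired} ready")

if __name__ == "__main__":
    main()
```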
Roles and Responsibilities
- Manage various resources across different cloud providers (Azure, AWS, and GCP)
- Monitor and optimize infrastructure cost (see the Cost Explorer sketch after this list)
- Manage various Kubernetes clusters with appropriate monitoring and alerting setup
- Build CI/CD pipelines to orchestrate provisioning and deployment of various services into the Kubernetes infrastructure
- Work closely with the development team on upcoming features to determine the correct infrastructure and related tools
- Assist the support team with escalated customer issues
- Develop, improve, and thoroughly document operational practices and procedures
- Responsible for setting up good security practices across various clouds
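As a small illustration of the cost-monitoring responsibility above, here is a sketch that uses the AWS Cost Explorer API via boto3 to break last month's spend down by service. Configured AWS credentials and region are assumed; no account-specific values are hard-coded.

```python
# Sketch of cost monitoring with the AWS Cost Explorer API: report last
# month's unblended cost grouped by service. Assumes configured credentials.
import datetime
import boto3

ce = boto3.client("ce")

end = datetime.date.today().replace(day=1)           # first day of this month
start = (end - datetime.timedelta(days=1)).replace(day=1)  # first day of last month

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service:40} ${amount:,.2f}")
```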
