11+ FMEA Jobs in Chennai | FMEA Job openings in Chennai
Primary Skills:
- In-depth knowledge of the Healthcare and BPaaS domains
- Good understanding of the Delivery Excellence Framework
- Expertise in business and IT processes, platforms, operations, controls, and dependencies
Secondary Skills:
- Intermediate-to-expert proficiency in conducting audits and assessments in BPaaS domain areas
- Proficiency in building, monitoring, and testing controls for BPaaS
- Facilitate smooth, escalation-free delivery, and enable best-practice adoption and evangelization through appropriate processes and platforms
- Act as solution consultant for projects, helping them adopt the right tools and measures to ensure product and service quality in the appropriate service lines
- Ensure effective governance of projects and deliverables
- Proactively manage and mitigate risk: build a known-risks database to identify and mitigate risks early, and establish oversight to monitor quality, targets, and spending against the implementation plan
- Enable and work with accounts/projects to comply with regulatory requirements and the other certification requirements of the organization and its customers in the BPaaS domain; support closure of external/internal audit findings and RCA/CAPA/FMEA for escalations
- Establish Integrated Risk and Delivery Governance models, and oversight to monitor quality, targets, overrun and penalty
- Build process maps for BPaaS describing the high-level activities across Product Service Lines; establish ETVX, role definitions, gating criteria, SLAs, OLAs, and RACI
- Design and ensure operational and compliance controls are implemented, tested and monitored
- Build core competency to carry out DEx BPaaS audits and assessments

About Ampera
Ampera builds enterprise-grade AI platforms that sit at the intersection of large-scale data systems, intelligent orchestration, and applied AI.
Our products help Fortune 1000 companies optimize operations, manage risk, and unlock decision intelligence using AI, LLMs, and agentic workflows.
Role Overview
We are looking for a strong Full Stack Engineer who has built production-grade enterprise applications, worked with large datasets, and is excited about integrating AI systems and LLM-powered workflows into real-world platforms.
This is not a UI-only or API-only role.
You’ll own end-to-end system design, from frontend experiences to backend orchestration and AI integration.
What You’ll Work On
- Enterprise web platforms used by analysts, admins, and leadership
- High-scale data-intensive applications (query orchestration, risk intelligence, estimation engines)
- AI-augmented workflows (LLMs, agents, optimizers, explainability layers)
- Secure, governed, multi-tenant systems with role-based access
Key Responsibilities
Full Stack Development
- Design and build scalable web applications (frontend + backend)
- Develop API-first backend services for enterprise workflows
- Build admin dashboards, analyst workflows, and decision-ready UIs
- Ensure performance, reliability, and maintainability at scale
Backend & Data Systems
- Work with large-scale relational databases (Azure Synapse, Redshift, Snowflake, PostgreSQL, SQL Server, etc.)
- Design data models for high-volume, analytical workloads
- Implement caching, orchestration, background processing, and async workflows
- Integrate with enterprise systems (identity, BI tools, data platforms)
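As a rough illustration of the caching and async-workflow responsibilities above, here is a minimal stdlib-only sketch (the `ttl_cache` decorator and `expensive_lookup` function are illustrative names, not part of Ampera's actual stack):

```python
import asyncio
import time
from functools import wraps

def ttl_cache(ttl_seconds=60):
    """Cache results with a time-to-live, a common pattern for hot query paths."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=30)
def expensive_lookup(key):
    return key.upper()  # stand-in for a slow database or API call

async def process_batch(items):
    """Run blocking lookups concurrently via the default thread-pool executor."""
    loop = asyncio.get_running_loop()
    return await asyncio.gather(
        *(loop.run_in_executor(None, expensive_lookup, i) for i in items)
    )

results = asyncio.run(process_batch(["risk", "estimate"]))
```

In production this pattern typically moves to Redis for the cache and a task queue for background work; the sketch only shows the shape of the problem.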
AI / LLM Integration
- Integrate LLMs (OpenAI, Azure OpenAI, etc.) into production systems
- Build AI-powered services:
- Query optimization
- Risk scoring & explainability
- Estimation & prediction workflows
- Implement agentic AI patterns (multi-step reasoning, tool-using agents, orchestration)
- Work closely with ML engineers to productionize AI models
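The agentic pattern mentioned above (structured model output dispatched to registered tools) can be sketched with a stubbed model call; `fake_llm`, `TOOLS`, and `run_agent_step` are hypothetical names standing in for a real provider SDK such as OpenAI's:

```python
import json

def fake_llm(prompt):
    """Stand-in for a real LLM call. Returns a structured 'tool call'
    the way a tool-using agent expects (JSON with tool name + args)."""
    if "total" in prompt:
        return json.dumps({"tool": "sum_values", "args": {"values": [3, 4, 5]}})
    return json.dumps({"tool": "none", "args": {}})

# Registry of tools the agent is allowed to invoke.
TOOLS = {"sum_values": lambda values: sum(values)}

def run_agent_step(user_request):
    """One step of an agent loop: ask the model, parse its structured
    output, dispatch to a registered tool, and return the result."""
    decision = json.loads(fake_llm(user_request))
    tool = TOOLS.get(decision["tool"])
    if tool is None:
        return None
    return tool(**decision["args"])

result = run_agent_step("What is the total of the values?")
```

Multi-step agents repeat this loop, feeding each tool result back into the next prompt until the model signals completion.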
Enterprise Readiness
- Implement RBAC, audit logging, governance, and guardrails
- Design systems with security, compliance, and observability in mind
- Contribute to CI/CD pipelines, deployments, and production monitoring
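A minimal sketch of the RBAC-plus-audit-logging guardrail described above, assuming an in-memory role map and log (a real system would persist both and pull roles from an identity provider):

```python
from functools import wraps

ROLE_PERMISSIONS = {
    "admin": {"read", "write", "audit"},
    "analyst": {"read"},
}

AUDIT_LOG = []  # in production this would be an append-only, persisted log

def require_permission(permission):
    """Guard a handler with a role check and record an audit entry either way."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            allowed = permission in ROLE_PERMISSIONS.get(user["role"], set())
            AUDIT_LOG.append((user["name"], fn.__name__, "ok" if allowed else "denied"))
            if not allowed:
                raise PermissionError(f"{user['role']} lacks '{permission}'")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("write")
def update_record(user, record_id):
    return f"updated {record_id}"
```

Denied calls still leave an audit entry before raising, which is the property governance reviews usually check for.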
Required Skills & Experience
Core Engineering
- 4–8+ years of experience as a Full Stack Engineer
- Strong backend experience with Python (FastAPI preferred) or equivalent
- Solid frontend experience with React / modern JS frameworks
- Strong understanding of REST APIs, async processing, microservices
Data & Systems
- Hands-on experience with large databases and complex schemas
- Strong SQL skills and experience optimizing queries
- Experience building enterprise-grade, data-heavy applications
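Query optimization of the kind listed above often comes down to reading query plans before and after adding an index; a self-contained SQLite sketch (table and index names are illustrative):

```python
import sqlite3

# In-memory database standing in for a large analytical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, f"acct{i % 100}", i * 1.5) for i in range(10_000)],
)

# Without an index, filtering on `account` forces a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM events WHERE account = 'acct7'"
).fetchone()[-1]

conn.execute("CREATE INDEX idx_events_account ON events(account)")

# With the index, the plan switches to an index search.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM events WHERE account = 'acct7'"
).fetchone()[-1]
```

The same before/after discipline applies to the warehouse engines listed (Synapse, Redshift, Snowflake), just with each engine's own `EXPLAIN` output.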
AI / Modern Stack
- Practical experience integrating LLMs or AI services into applications
- Familiarity with:
- Prompting & structured outputs
- AI evaluation & guardrails
- Explainability and risk-aware AI
- Exposure to agentic AI frameworks or multi-step AI workflows is a big plus
Nice-to-Have (Strong Differentiators)
- Experience building AI-powered SaaS or internal enterprise platforms
- Background in analytics, risk systems, finance, supply chain, or operations
- Experience with Redis, message queues, background workers
- Familiarity with Azure / AWS, containerization, Kubernetes
- Ability to translate business problems → system design
What We Look For (Beyond Skills)
- Strong systems thinking — you see the whole picture
- Comfortable operating in ambiguous, zero-to-one builds
- Ability to reason about scale, cost, and enterprise constraints
- Builder mindset — you ship, iterate, and own outcomes
Why Join Ampera
- Work on real enterprise AI platforms, not demos or chatbots
- Exposure to LLMs, agentic AI, and applied AI at scale
- High ownership, senior-heavy team, minimal bureaucracy
- Opportunity to shape core architecture and AI strategy
Job Title: Senior Linux Kernel Engineer
Experience: 5–10 Years
Location: Bangalore / Chennai
Domain: Enterprise Linux / Kernel Development
Job Summary
We are seeking a highly skilled Senior Linux Kernel Engineer with deep expertise in kernel development, debugging, and performance optimization. The role involves working on enterprise-grade Linux distributions, kernel lifecycle management, security patching, and low-level hardware integration.
Key Responsibilities
1. Kernel Lifecycle & Maintenance
- Lead kernel upgrade strategies (e.g., LTS migrations such as 5.15 → 6.x) while ensuring stability and compatibility.
- Perform patch porting across kernel versions, resolving API and dependency conflicts.
- Track and mitigate security vulnerabilities by monitoring CVEs and upstream sources (e.g., LKML).
- Backport critical fixes to production kernels without impacting system stability.
2. Debugging & System Stability
- Act as an escalation point for kernel panics and system crashes.
- Perform post-mortem analysis using kdump, crash, and gdb.
- Debug early boot issues (UEFI, initramfs, kernel initialization).
- Conduct performance analysis using eBPF, ftrace, and perf to optimize system behavior.
3. Driver Development & Hardware Integration
- Design, develop, and maintain device drivers (network, storage, GPU, or character devices).
- Work closely with hardware through DMA, interrupts (MSI-X), and register-level programming.
- Maintain out-of-tree drivers using DKMS or similar frameworks.
- Ensure compatibility of drivers across kernel updates.
Required Technical Skills
- Programming: Strong expertise in C (mandatory) and C++
- Kernel Internals: Deep understanding of:
- Virtual File System (VFS)
- Memory Management (MMU, Paging)
- Process Scheduler
- Linux Networking Stack
- Debugging Tools:
- kdump, crash, gdb
- kprobes, trace-cmd, ftrace
- perf, valgrind
- Hardware debugging tools (JTAG, Serial Console)
- Build Systems:
- Kbuild, Makefiles
- Kernel packaging (RPM/Debian)
- Security:
- Experience with CVE patching and backporting
- Knowledge of SELinux/AppArmor
- Kernel hardening (FIPS, KSPP)
Preferred Skills
- Experience contributing to open-source kernel projects
- Familiarity with Linux Kernel Mailing List (LKML) workflows
- Exposure to enterprise Linux distributions (RHEL, Ubuntu, SUSE)
- Experience with performance tuning and system optimization at scale
1. Core Programming (C Language)
- Must have strong hands-on experience in C programming
- Comfortable with pointers, memory management, and low-level concepts
2. Kernel Internals Expertise
- Should have worked in at least one subsystem:
- VFS / File Systems
- Memory Management
- Scheduler / Networking
3. Debugging & Crash Analysis
- Experience handling kernel panics
- Hands-on with vmcore analysis tools
4. Security & Patching
- Understanding of CVE fixes and backporting
5. Driver Development
- Experience in writing or maintaining device drivers
6. Performance & Advanced Debugging
- Exposure to eBPF, ftrace, perf
7. Hardware-Level Understanding
- Knowledge of DMA, interrupts, hardware interaction
Soft Skills
- Strong analytical and problem-solving abilities
- Excellent communication skills
- Ability to work independently and in collaborative environments
- Quick learner with adaptability to new technologies
Requirements:
- A minimum of 3 years of proven experience with Zoho CRM is mandatory; you must be an expert in the platform.
- Hands-on experience with coding and development within Zoho CRM (e.g., Deluge scripting, API integration, custom functions, and workflow automation).
- Strong understanding of sales processes and how to optimize them using CRM systems.
- Proficient in analyzing data, generating reports, and using data to drive decisions.
- Familiarity with BigQuery or similar data warehousing platforms for processing and analyzing large datasets is advantageous.
- Ability to work cross-functionally with sales, marketing, and customer success teams, as well as technical staff.
- Strong problem-solving skills with a proactive approach to identifying and fixing CRM-related issues.
- Bachelor's degree in a relevant field (e.g., IT, Computer Science, Business Management).
- Preferred: Experience working in a SaaS environment.
The Knowledge Graph Architect is responsible for designing, developing, and implementing knowledge graph technologies to enhance organizational data understanding and decision-making capabilities. This role involves collaborating with data scientists, engineers, and business stakeholders to integrate complex data into accessible and insightful knowledge graphs.
Work you’ll do
1. Design and develop scalable and efficient knowledge graph architectures.
2. Implement knowledge graph integration with existing data systems and business processes.
3. Lead the ontology design, data modeling, and schema development for knowledge representation.
4. Collaborate with IT and business units to understand data needs and deliver comprehensive knowledge graph solutions.
5. Manage the lifecycle of knowledge graph data, including quality, consistency, and updates.
6. Provide expertise in semantic technologies and machine learning to enhance data interconnectivity and retrieval.
7. Develop and maintain documentation and specifications for system architectures and designs.
8. Stay updated with the latest industry trends in knowledge graph technologies and data management.
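The triple-based data model underlying the work above can be sketched without a graph database; this stdlib-only toy (with hypothetical `ex:` identifiers) mimics how a SPARQL pattern with variables matches RDF triples:

```python
# Triples: (subject, predicate, object) — the core RDF data model.
TRIPLES = {
    ("ex:Pump42", "rdf:type", "ex:Equipment"),
    ("ex:Pump42", "ex:locatedIn", "ex:PlantChennai"),
    ("ex:PlantChennai", "rdf:type", "ex:Facility"),
}

def query(subject=None, predicate=None, obj=None):
    """Pattern-match triples; a None argument acts like a SPARQL variable."""
    return {
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# "Which things are located in the Chennai plant?" — roughly
# SELECT ?s WHERE { ?s ex:locatedIn ex:PlantChennai }
located = {s for (s, _, _) in query(predicate="ex:locatedIn", obj="ex:PlantChennai")}
```

Production systems would use a store like Neo4j or Amazon Neptune with a defined ontology; the sketch only shows why variables-over-triples make graph queries composable.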
The Team
Innovation & Technology anticipates how technology will shape the future and begins building future capabilities and practices today. I&T drives the ideation, incubation, and scaling of hybrid businesses and tech-enabled offerings across a prioritized offering portfolio and industry interactions.
It drives cultural and capability transformation from solely services-based businesses to hybrid businesses. While others bet on the future, I&T builds it with you.
I&T encompasses many teams—dreamers, designers, builders—and partners with the business to bring a unique POV to deliver services and products for clients.
Qualifications and Experience
Required:
1. Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
2. 6–10 years of professional experience in data engineering, with proven experience in designing and implementing knowledge graph systems.
3. Strong understanding of semantic web technologies (RDF, SPARQL, GraphQL, OWL, etc.).
4. Experience with graph databases such as Neo4j, Amazon Neptune, or others.
5. Proficiency in programming languages relevant to data management (e.g., Python, Java, JavaScript).
6. Excellent analytical and problem-solving abilities.
7. Strong communication and collaboration skills to work effectively across teams.
Preferred:
1. Experience with machine learning and natural language processing.
2. Experience with Industry 4.0 technologies and principles
3. Prior exposure to cloud platforms and services like AWS, Azure, or Google Cloud.
4. Experience with containerization technologies like Docker and Kubernetes
- KSQL
- Data Engineering spectrum (Java/Spark)
- Spark Scala / Kafka Streaming
- Confluent Kafka components
- Basic understanding of Hadoop

A well-funded predictive analytics organization.
What The Role Is
We are looking for a GRC Operations Officer based in Chennai. This is a new role within the growing IT Compliance function, where you will be responsible for handling audits, implementing information security policies, and related activities. The successful candidate will be comfortable working with the team on implementing frameworks and providing support for internal and external stakeholders. Reporting to the IT Compliance Officer for our Chennai team, this role is integral to the successful growth of the team as well as wider company performance.
What You’ll Do
- Contribute and assist with continuous improvement of company policies, practices, and procedures
- Review, modify and maintain existing practices and policies to reflect our operations and values within specific industry-standard frameworks like ISO and NIST, among others
- Provide support for internal and third-party audits
- Respond to due diligence and TPRM requests from customers and other interested parties.
- Support internal staff with GRC-related questions and topics
- Develop, maintain and execute awareness programs
- Be a local representative of the company’s GRC group and manage the physical security requirements for the location
- Work independently and prioritize multiple tasks and adapt to needed changes
- Effectively communicate risks to diverse audiences, both in writing and verbally
- Apply a risk-based approach to planning, executing, and reporting on audit engagements and the auditing process
What You’ll Bring
- 2–5 years of IT security, IT risk, IT auditing, and/or IT compliance experience within a technology company, accounting firm, or similar
- Bachelor's degree or equivalent work experience in a compliance/GRC team
- Exceptional organisational skills and attention to detail
- Knowledge of applicable domestic and internationally recognized information security management, governance, and compliance principles, practices, laws, rules, and regulations
- Understanding of information systems auditing, monitoring, controls, and assessment processes
Perks & Benefits:
- Competitive base salary
- Equity - every employee is a stakeholder in our enormous upside
- A tech-first company culture driven by entrepreneurial thinking and talent
- A great team working in unison towards the same mission
- Transparency is what our product is built on—and so is our culture
- Generous health insurance benefits for employees and their dependents
- Parental leave.
- Flexible work schedule and work-from-home options
- Flexible PTO
Job Responsibilities:
Support, maintain, and enhance existing and new product functionality for trading software in a real-time, multi-threaded, multi-tier server architecture environment. Create high- and low-level designs for concurrent, high-throughput, low-latency software architecture.
- Provide software development plans that meet future needs of clients and markets
- Evolve the new software platform and architecture by introducing new components and integrating them with existing ones
- Perform memory, CPU, and resource management
- Analyze stack traces, memory profiles, and production incident reports from traders and support teams
- Propose fixes and enhancements to existing trading systems
- Adhere to release and sprint planning with the Quality Assurance Group and Project Management
- Work on a team building new solutions based on requirements and features
- Attend and participate in daily scrum meetings
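The multi-threaded server work described above usually reduces to safe hand-off between producer and worker threads; a minimal Python sketch with illustrative order/fill names (a real trading system would be far more elaborate and latency-sensitive):

```python
import queue
import threading

# One producer feeds trade orders to worker threads via a thread-safe queue.
orders = queue.Queue()
fills = []
fills_lock = threading.Lock()
SENTINEL = None  # poison pill telling a worker to exit

def worker():
    while True:
        order = orders.get()
        if order is SENTINEL:
            orders.task_done()
            break
        with fills_lock:  # protect the shared fills list
            fills.append({"id": order["id"], "status": "filled"})
        orders.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for i in range(100):
    orders.put({"id": i, "qty": 10})
for _ in threads:          # one sentinel per worker so all exit cleanly
    orders.put(SENTINEL)
for t in threads:
    t.join()
```

The queue serializes access so workers never race on dequeue; only the shared results list needs an explicit lock.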
Required Skills:
- JavaScript and Python
- Multi-threaded browser and server applications
- Amazon Web Services (AWS)
- REST
Designation: React JS developer
Experience: 1-4 years
Location: Chennai
Must have work experience in ReactJS and front-end development.
Job description
- Strong programming skills in JavaScript.
- Experience with ReactJS and frontend frameworks/libraries like Bootstrap and jQuery.
- Good knowledge of HTML5, CSS3, and object-oriented concepts.
- Experience working with RESTful APIs.
- Experience with HTTP, HTTPS, and WebSockets.
- Knowledge of version control: Git, Subversion.
- Exposure to OWASP standards and guidelines.
- Python coding skills
- Scikit-learn, pandas, tensorflow/keras experience
- Machine learning: designing ML models for regression, classification, dimensionality reduction, anomaly detection, etc., and explaining them
- Implementing machine learning models and pushing them to production
- Creating Docker images for ML models; REST API creation in Python
- Additional Skills Compulsory:
- Knowledge and professional experience of text and NLP projects such as text classification, text summarization, topic modeling, etc.
- Additional Skills Compulsory:
- Knowledge and professional experience of vision and deep learning for documents: CNNs and deep neural networks using TensorFlow/Keras for object detection, OCR implementation, document extraction, etc.
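To make the text-classification requirement concrete, here is a stdlib-only multinomial Naive Bayes sketch; production work would use scikit-learn or TensorFlow as listed, and the class name, corpus, and labels are purely illustrative:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        total_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            # log prior + summed log likelihoods, smoothed so unseen words
            # never zero out a class
            score = math.log(self.class_counts[label] / total_docs)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for word in text.lower().split():
                score += math.log((counts[word] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayesTextClassifier().fit(
    ["invoice total due", "payment overdue invoice",
     "team lunch friday", "office party friday"],
    ["finance", "finance", "social", "social"],
)
```

Being able to walk through this log-probability arithmetic by hand is exactly the "explaining them" part of the model-design bullet above.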
We are hiring for a Sitecore Developer.
Mandatory Skill:
Sitecore Developer
Years Of Exp : 4+ Yrs
Notice Period : Immediate / Short Joiners
Location : Chennai

