Facility Management Jobs in Chennai
Office Administrator
Roles and Responsibilities:
- Maintain confidentiality in all aspects of client and staff information
- Interact with staff, clients, suppliers, and visitors
- Open, sort and distribute incoming correspondence.
- Perform general clerical duties including, but not limited to, copying, mailing, laminating, and filing.
- Provide support to the Maintenance team on the raising and closing of reactive maintenance tasks.
- Use the system to raise purchase orders and process the associated invoices for weekly posting.
- End-to-end facility management (e.g., security, fire & safety, building security).
- Provide support to the Facility Management team in maintaining supplier matrices and back-to-work / self-certification documentation.
- Order and maintain stock related to the facilities management service provision.
- Attend team meetings and produce subsequent minutes/actions
- Produce hotel, weekend, weekday, tenant, and ad-hoc car park passes as requested.
- Where applicable, meet and greet visitors, including organizing appropriate hospitality.
- Where applicable, assist the Centre Receptionist and Administrator.
- ISO Internal Audit Knowledge will be an added advantage.
Office Administrator Qualifications / Skills:
- Managing processes
- Developing standards
- Promoting process improvement
- Tracking budget expenses
- Staffing
- Supervision
- Delegation
- Informing others
- Reporting skills
- Supply management
- Inventory control
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational and NoSQL stores, data lakes, Delta Lake, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns (see the DAG sketch after this list).
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
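To make the orchestration responsibility concrete, below is a minimal Airflow DAG sketch. The dag id, task callables, and schedule are illustrative assumptions, not details of the actual platform:

```python
# Minimal sketch of a daily batch pipeline under Airflow 2.x.
# All names (dag_id, tasks, what they do) are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull a batch from a source (RDBMS, API, flat files).
    print("extracting orders for", context["ds"])


def load_to_lake(**context):
    # Placeholder: write the batch to lake storage as partitioned files.
    print("loading partition", context["ds"])


with DAG(
    dag_id="orders_batch_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_lake", python_callable=load_to_lake)
    extract >> load
```

The same dependency pattern extends to sensors and event-driven triggers on the serverless side of the role.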
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, and SQL for analytics, including window functions and performance tuning (see the PySpark sketch after this list).
- ETL tools: AWS Glue, Informatica, Databricks, GCP Dataproc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive hands-on Terraform practice; mature CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, Amazon Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
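As a concrete taste of the window-function and PySpark skills above, here is a minimal dedup sketch that keeps only the latest record per business key, a typical step before an SCD-style load. The paths and column names are assumptions:

```python
# Keep the most recent row per customer_id using a window function.
# The S3 paths and columns (customer_id, updated_at, country) are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup-latest").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/customers/raw/")

w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())

latest = (
    raw.withColumn("rn", F.row_number().over(w))
       .filter(F.col("rn") == 1)
       .drop("rn")
)

latest.write.mode("overwrite").partitionBy("country").parquet(
    "s3://example-bucket/customers/curated/"
)
```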
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as an analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/Dataproc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
· Problem-solving skills; able to turn an idea on paper into working code.
· Bachelor’s degree in computer science or related field, or equivalent professional experience.
· 0 - 4 years of database experience (Oracle SQL, PL/SQL)
· Proficiency in Oracle, with hands-on experience in database design.
· Creation & implementation of data models.
· Strong experience with Oracle functions, procedures, triggers, packages.
· Willing to learn and quickly adopt needed cutting-edge tools and technologies in a short timeframe.
· Should be able to write basic procedures and functions (a minimal example of invoking them from Python follows this list).
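As a minimal illustration of exercising such procedures and functions from application code, the sketch below calls hypothetical Oracle objects through the python-oracledb driver; the connection details and routine names are placeholders:

```python
# Call a hypothetical stored procedure and function from Python.
import oracledb

conn = oracledb.connect(
    user="app_user",                         # placeholder credentials
    password="app_password",
    dsn="dbhost.example.com:1521/ORCLPDB1",  # placeholder DSN
)

with conn.cursor() as cur:
    # Hypothetical procedure taking an order id and a new status.
    cur.callproc("update_order_status", [1001, "SHIPPED"])

    # Functions are invoked with callfunc, supplying the return type.
    total = cur.callfunc("get_order_total", oracledb.NUMBER, [1001])
    print("order total:", total)

conn.commit()
conn.close()
```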
- Strong knowledge of JavaScript and HTML5 Canvas, including drawing-pad style interaction; hands-on HTML5 Canvas experience is a must
- Thorough, semantics-level understanding of the HTML5 Canvas API
- Good understanding of JavaScript programming and DOM manipulation
- Understanding and deployment of Responsive Web Design
- Understanding of how web design and development relate to web performance
Other Selection Criteria
- Candidate should be available to join immediately
The candidates should have:
· Strong knowledge of Windows and Linux operating systems
· Experience working with version control systems like Git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
· Basic understanding of SQL commands
· Experience working with Azure DevOps in the cloud
• Responsible for designing, deploying, and maintaining an analytics environment that processes data at scale
• Contribute to the design, configuration, deployment, and documentation of components that manage data ingestion, real-time streaming, batch processing, and the extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modeling and data warehousing at cloud scale using cloud-native solutions
• Perform development, QA, and DevOps roles as needed to ensure end-to-end ownership of solutions
COMPETENCIES
• Experience building, maintaining, and improving data models, processing pipelines, and routing in large-scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large-dataset platforms such as Amazon EMR, Amazon Redshift, AWS Lambda and Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, and Databricks
• Fluency in multiple programming languages, such as Python, shell scripting, SQL, Java, or similar languages and tools appropriate for large-scale data processing
• Experience acquiring data from varied sources such as APIs, data queues, flat files, and remote databases (see the sketch after this list)
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
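To illustrate the "varied sources" competency above, here is a small sketch that pulls a page of records from a REST API and lands it in S3 as raw JSON; the URL, bucket, and key layout are assumptions:

```python
# Fetch one page from a hypothetical API and land it in a raw S3 zone.
import json
from datetime import date

import boto3
import requests

API_URL = "https://api.example.com/v1/events"  # hypothetical source
BUCKET = "example-raw-zone"                    # hypothetical bucket

resp = requests.get(API_URL, params={"limit": 1000}, timeout=30)
resp.raise_for_status()
records = resp.json()

s3 = boto3.client("s3")
key = f"events/ingest_date={date.today().isoformat()}/page-0001.json"
s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
print(f"landed {len(records)} records at s3://{BUCKET}/{key}")
```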
Senior Engineer – Artificial Intelligence / Computer Vision
(Business Unit – Autonomous Vehicles & Automotive - AVA)
We are seeking an exceptional, experienced senior engineer with deep expertise in Computer Vision, Neural Networks, 3D Scene Understanding and Sensor Data Processing. The expectation is to lead a growing team of engineers to help them build and deliver customized solutions for our clients. A solid engineering as well as team management background is a must.
About MulticoreWare Inc
MulticoreWare Inc is a software and solutions development company with top-notch talent and skill in a variety of micro-architectures, including multi-thread, multi-core, and heterogeneous hardware platforms. It works in sectors including High Performance Computing (HPC), Media & AI Analytics, Video Solutions, Autonomous Vehicle and Automotive software, all of which are rapidly expanding. The Autonomous Vehicles & Automotive business unit specializes in delivering optimized solutions for sophisticated sensor fusion intelligence and the design of algorithms & implementation of software to be deployed on a variety of automotive grade hardware platforms.
Role Responsibilities
● Lead a team to solve problems in the perception / autonomous-systems space and turn ideas into code and products
● Drive all technical elements of development, such as project requirements definition, design, implementation, unit testing, integration, and software delivery
● Implement cutting-edge AI solutions on embedded platforms and optimize them for performance; design and develop hardware-architecture-aware algorithms
● Contribute to the vision and long-term strategy of the business unit
Required Qualifications (Must Have)
● 3 - 7 years of experience with real-world system building, including design, coding (C++/Python) and evaluation/testing (C++/Python)
● Solid experience in 2D / 3D Computer Vision algorithms, Machine Learning and Deep Learning fundamentals – Theory & Practice. Hands-on experience with Deep Learning frameworks like Caffe, TensorFlow or PyTorch
● Expert-level knowledge in any area related to signal/data processing or autonomous/robotics software development (perception, localization, prediction, planning), including multi-object tracking and sensor fusion algorithms, plus familiarity with Kalman filters, particle filters, clustering methods, etc. (a minimal Kalman filter sketch follows this list)
● Good project management and execution capabilities, as well as good communication and coordination ability
● Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related fields
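To illustrate the tracking fundamentals referenced above, here is a minimal 1D constant-velocity Kalman filter in NumPy; the motion model and noise covariances are arbitrary demo values, not a production tracker:

```python
# 1D constant-velocity Kalman filter: track position from noisy measurements.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])  # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])       # we observe position only
Q = 0.01 * np.eye(2)             # process noise covariance (assumed)
R = np.array([[0.5]])            # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])     # initial state estimate
P = np.eye(2)                    # initial estimate covariance

rng = np.random.default_rng(0)
true_pos = 0.0
for _ in range(50):
    true_pos += 1.0 * dt                             # target moves at 1 m/s
    z = np.array([[true_pos + rng.normal(0, 0.7)]])  # noisy measurement

    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("estimated [position, velocity]:", x.ravel())
```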
Preferred Qualifications (Nice-to-Have)
● GPU architecture and CUDA programming experience, as well as knowledge of AI inference optimization using Quantization, Compression (or) Model Pruning
● Track record of research excellence, with prior publications at top-tier conferences and in journals
We are seeking a Security Program Manager to effectively drive privacy and security programs in collaboration with cross-functional teams. You will partner with engineering leadership, product management, and development teams to deliver more secure products.
Roles & Responsibilities:
- Work with multiple stakeholders across departments such as IT, Engineering, Business, Legal, and Finance to implement controls defined in policies and processes.
- Manage projects with security and audit requirements with internal and external teams and serve as a liaison among all stakeholders.
- Manage penetration tests and security reviews for core applications and APIs.
- Identify, create and guide on privacy and security requirements considering applicable Data Protection Laws and implement them across software modules developed at Netmeds.
- Brainstorm with engineering teams to figure out how privacy and security controls can be applied to Netmeds tech stack.
- Coordinate with infra and dev teams on DB and application hardening and the standardization of server images / containerization.
- Assess vendors' security posture before onboarding and, after they qualify, review it at a set frequency.
- Manage auditors and ensure compliance for ISO 27001 and other data privacy audits.
- Answer questions and resolve issues reported by external security researchers and bug bounty hunters.
- Investigate privacy breaches.
- Educate employees on data privacy & security.
- Prioritize security requirements based on their severity of impact and product roadmap.
- Maintain a balance of security and business values across the organisation.
Required Skills:
- Web Application Security, Mobile Application Security, Web Application Firewall, DAST, SAST, Cloud Security (AWS), Docker Security, Manual Penetration Testing.
- Good hands-on experience in handling tools such as vulnerability scanners, Burp suite, patch management, web filtering & WAF.
- Familiarity with cloud hosting technologies (e.g., AWS, Azure) and an understanding of IAM, RBAC, NACLs, and KMS (a small posture-check sketch follows this list).
- Experience in Log Management, Security Event Correlation, SIEM.
- Must have strong interpersonal skills and should be able to communicate complex ideas seamlessly in written and verbal communication.
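As a small example of the cloud-security posture work described above, this sketch flags S3 buckets that lack a full public access block, using boto3. It is purely illustrative; a real review would also cover bucket policies, ACLs, and IAM:

```python
# Flag S3 buckets whose public access block is missing or incomplete.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(cfg.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # nothing configured at the bucket level
        else:
            raise
    if not fully_blocked:
        print(f"review needed: {name} does not fully block public access")
```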
Good to Have Skills:
- Online Fraud Prevention.
- Bug Bounty experience.
- Security Operations Center (SOC) management.
- Experience with Amazon AWS services (EC2, S3, VPC, RDS, CloudWatch).
- Experience with or knowledge of tools like Fortify and Nessus.
- Experience handling logging tools on Docker container images (e.g., Fluentd).
As a technical member of the company, you should be comfortable with both front-end and back-end coding languages, development frameworks, and third-party libraries. You should also be a team player with a knack for visual design and utility.