

Gradera AI Technologies
https://gradera.ai
Jobs at Gradera AI Technologies
Role & Responsibilities
· Collect, clean, and analyze large structured and unstructured datasets from multiple internal and external sources
· Conduct thorough exploratory data analysis (EDA) to understand data distributions, relationships, outliers, and missing value patterns
· Profile and audit datasets to assess data quality, completeness, consistency, and fitness for modeling
· Investigate and document data lineage — understanding where data originates, how it flows, and how it transforms across systems
· Identify and resolve data anomalies, inconsistencies, and integrity issues in collaboration with data engineering teams
· Develop a deep understanding of the business domain and the underlying data that represents it — including what each field means, how it is captured, and what its limitations are
· Translate raw, messy, real-world data into clean, well-understood analytical datasets ready for modeling and reporting
· Apply statistical techniques such as correlation analysis, hypothesis testing, variance analysis, and distribution fitting to extract meaningful signals from noise
· Build and deploy machine learning models including regression, classification, clustering, NLP, and time-series analysis
· Design, evaluate, and analyze A/B experiments and controlled tests using causal inference techniques
· Develop data-driven recommendations backed by rigorous statistical reasoning
· Write clean, production-ready code in Python or R
· Collaborate with data engineers to build reliable data pipelines and feature stores
· Deploy and monitor ML models using MLOps best practices on cloud infrastructure
· Build dashboards and self-serve analytics tools to support stakeholder decision-making
Data Understanding & Analysis Skills
· Strong ability to interrogate unfamiliar datasets and quickly develop a working understanding of their structure, semantics, and quirks
· Experience working with messy, incomplete, or poorly documented real-world data
· Skilled in identifying hidden patterns, trends, seasonality, and anomalies through visual and statistical exploration
· Ability to ask the right questions about data — challenging assumptions, validating sources, and understanding the context in which data was collected
· Proficiency in data profiling, descriptive statistics, and summary reporting to communicate the shape and health of a dataset
· Experience creating data dictionaries, documentation, and data quality reports to support team-wide data understanding
· Comfort working across structured (relational tables), semi-structured (JSON, XML), and unstructured (text, logs, sensor streams) data formats
Technical Skills Required
· Proficiency in Python (pandas, NumPy, scikit-learn, PyTorch or TensorFlow) and/or R
· Strong SQL skills with hands-on experience in DB2 and SQL Server
· Experience with Databricks for large-scale data processing, feature engineering, and model training
· Familiarity with cloud platforms: Azure or AWS
· Experience with data warehouses and big data platforms (Databricks, Snowflake, or Redshift)
· Knowledge of MLOps tools such as MLflow, Kubeflow, or Airflow
· Experience with streaming data technologies such as Kafka or Spark
· Solid foundation in probability, statistics, linear algebra, and experimental design
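As a small illustration of the data profiling and descriptive-statistics work described above, here is a minimal, dependency-free sketch (the records and field names are hypothetical) that summarizes completeness and cardinality per field:

```python
def profile(records):
    """Return {field: {count, nulls, completeness, distinct}} for a list of dicts."""
    fields = {k for r in records for k in r}
    report = {}
    for f in fields:
        values = [r.get(f) for r in records]       # missing keys count as nulls
        nulls = sum(v is None for v in values)
        non_null = [v for v in values if v is not None]
        report[f] = {
            "count": len(values),
            "nulls": nulls,
            "completeness": round(1 - nulls / len(values), 3),
            "distinct": len(set(non_null)),
        }
    return report

# Illustrative rows; a real audit would run this over a sampled extract
rows = [
    {"id": 1, "temp": 21.5},
    {"id": 2, "temp": None},
    {"id": 3, "temp": 21.5},
]
print(profile(rows)["temp"])
```

A report like this is the starting point for the data dictionaries and quality reports mentioned above; production profiling would typically run at scale in SQL or Spark rather than in plain Python.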
Nice to Have
· Experience with deep learning, NLP, computer vision, or Bayesian methods
· Familiarity with real-time or streaming data pipelines
· Open-source contributions or published research
Data Engineer
Overview
We are seeking skilled Data Engineers to join our Data & Digital Twin Foundation team. You will design, build, and maintain data pipelines that power digital twin platforms, real-time operational systems, and AI/ML workloads. Working closely with data architects, simulation engineers, and ML teams, you will transform raw operational data into high-quality, governed datasets that drive intelligent decision-making.
Our core data platform stack includes:
Data Platform & Lakehouse
- Databricks as the single source of truth for all data
- Real-time data pipelines implemented with Kafka for data ingestion
- Databricks SQL for analytical queries
- Unity Catalog for metadata management and governance
- Teradata for data warehousing and business intelligence
Stream & Event Processing
- Apache Kafka for real-time event ingestion
- Structured Streaming for continuous data processing
- Delta Live Tables for declarative, quality-enforced pipelines
Data Quality
- Delta Live Tables expectations for data validation
- Data profiling and anomaly detection
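For illustration, a Delta Live Tables expectation of the kind described above might look like the following declarative pipeline fragment (table, rule, and column names are hypothetical; this only runs inside a Databricks DLT pipeline, not as a standalone script):

```python
import dlt  # available inside Databricks Delta Live Tables pipelines
from pyspark.sql.functions import col

@dlt.table(comment="Sensor readings with quality rules enforced")
@dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")  # drop rows that fail
@dlt.expect("non_negative_value", "value >= 0")                 # record violations, keep rows
def sensor_clean():
    # Read the upstream streaming table and apply a basic filter
    return dlt.read_stream("sensor_raw").where(col("value").isNotNull())
```

The pipeline records pass/fail counts for each named expectation in its event log, which is what makes these checks "quality-enforced" rather than silent filters.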
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks, PySpark, and Delta Lake
- Build real-time and batch ingestion pipelines from diverse operational systems using high-performance Kafka streams
- Implement data transformations that serve digital twin platforms and operational analytics
- Integrate Kafka event streams with Databricks for real-time operational state updates
- Implement data quality checks using Delta Live Tables expectations
- Ensure data governance compliance through Unity Catalog (lineage, access control, metadata)
- Optimize pipeline performance, reliability, and cost efficiency
- Write clean, well-documented, and testable code following engineering best practices
- Collaborate with ML engineers to deliver feature-engineered datasets
- Participate in code reviews, knowledge sharing, and continuous improvement initiatives
- Support production data systems through monitoring, troubleshooting, and incident resolution.
- Build business data warehouse solutions using Teradata for business intelligence
Preferred Qualifications
- 7+ years of hands-on data engineering experience
- Track record of building and maintaining production-grade data pipelines
- Experience with Delta Live Tables for declarative pipeline development
- Experience working in agile, cross-functional teams
- Familiarity with time-series data patterns and operational data modelling
Highly Desirable
- Experience building data pipelines for digital twin or simulation platforms
- Familiarity with operational state modeling for real-time systems
- Exposure to physics-informed or time-series ML feature engineering
- Experience working with distributed, multidisciplinary teams
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time
Lead API Engineer - IND
Engineering - Hyderabad, Telangana
About Gradera
At Gradera, we’re defining a new category of enterprise transformation called Software-Orchestrated Services™ (SoS™) — a governed blend of human expertise and digital intelligence that transforms how enterprises operate.
Our mission is to build intelligent digital workers that augment teams and automate work across the value chain, helping organizations become more efficient, agile, and resilient.
We don’t believe in one-size-fits-all solutions. Every engagement is tailored to the unique needs of each enterprise — grounded in governance, security, and reliability. By aligning technology with strategy, we empower our clients to achieve measurable outcomes and lead with confidence in a rapidly evolving digital landscape.
Lead – API Engineer
Key Responsibilities
• Provide technical leadership, mentorship, and guidance to the API engineering team
• Lead the design, implementation, and evolution of API architecture and backend services
• Champion API-first development, ensuring clear contracts and comprehensive documentation (OpenAPI/Swagger)
• Lead the implementation of APIs that meet all NFRs, including security, scalability, and performance
• Ensure robust authentication, authorization, and data protection for all endpoints
• Drive adoption of best practices in API design, versioning, and integration
• Optimize API performance for low-latency, high-frequency interactions
• Oversee observability, monitoring, and alerting for all APIs and backend services
• Collaborate closely with product, UI, and runtime teams to deliver integrated solutions
• Collaborate with SDETs to ensure comprehensive test coverage and effective test data creation
• Lead the adoption of modern API tools, frameworks, and best practices
• Ensure engineering rigor, code quality, and documentation standards are met
• Facilitate clear communication, knowledge sharing, and effective documentation within the team
• Support team growth through coaching, feedback, and skills development
Core Qualities & Skills
• Proven experience leading API or backend engineering teams and delivering complex API projects
• Deep expertise in API architecture, design, and implementation (RESTful, GraphQL, gRPC, etc.)
• Strong programming skills in relevant backend languages (e.g., Node.js, Python, Java, Go)
• Experience with API security, authentication, and authorization (OAuth, JWT, RBAC, PKCE, and fine-grained access control with ReBAC)
• Experience with API documentation and standards (OpenAPI/Swagger, use cases, JSON Schema, etc.)
• Familiarity with data serialization using Protobuf
• Hands-on experience with at least one API gateway (required)
• Strong understanding of performance optimization, scalability, and reliability for APIs
• Experience with observability, monitoring, and troubleshooting for backend services
• Strong collaboration and alignment skills across disciplines
• Willingness to learn, share knowledge, and adapt to evolving technologies
• System design skills and awareness of technical debt and tradeoffs
• Excellent communication, documentation, and stakeholder management abilities
• Comfort with ambiguity, discovery, and rapid change
• Commitment to engineering excellence, security, and responsible practices
Preferred Qualifications
• 7+ years of hands-on API or backend engineering experience, with 2+ years in a technical leadership role
• Track record of architecting and delivering scalable, reliable API systems
• Experience with modern development practices (CI/CD, automated testing, code reviews)
• Demonstrated ability to mentor and grow engineers
• Experience working in cross-functional, agile teams
Highly Desirable
• Experience with API gateways (e.g., Envoy, Kong, Apigee, Apache APISIX, KrakenD) and service mesh patterns (e.g., Istio)
• Experience with event-driven and streaming architectures (pub/sub, callbacks, Kafka, etc.)
• Experience with cloud-native infrastructure and automated provisioning (e.g., Terraform)
• Experience building developer portals and self-service onboarding for APIs
• Ability to perform API domain modelling with a strong product orientation, aligning technical design with user and business needs
• Experience thriving in fast-paced, ambiguous environments and balancing rapid delivery with technical excellence
• Experience leading or working with distributed, multidisciplinary teams
Success Metrics
• API response time (median and p95)
• Uptime and reliability of critical services
• Test coverage for API and backend logic
• Developer velocity and time from API design to production
• Time to First Prototype (TTFP)
• Integration lead time for API consumers
• Technical debt reduction and architectural alignment
• Team growth, engagement, and retention
• Stakeholder satisfaction and cross-team collaboration
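To make the latency metrics above concrete, median and p95 figures can be computed from raw response-time samples with the standard library; a minimal sketch using the nearest-rank percentile definition (sample values are illustrative):

```python
import math
import statistics

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

# Illustrative response times in milliseconds, including one slow outlier
samples_ms = [12, 15, 14, 18, 250, 16, 13, 17, 15, 14]

print(statistics.median(samples_ms))   # median latency
print(percentile(samples_ms, 95))      # p95 latency, dominated by the outlier
```

This is also why the metrics track both numbers: the median describes the typical request, while p95 surfaces tail latency that the median hides.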
Lead Software Development Engineer in Test (SDET) – UI
This role drives UI test automation strategy, framework design, and quality leadership for modern web applications. It is hands-on and cross-functional, emphasizing scalable, maintainable, and reliable Playwright-based UI automation and a strong culture of quality across the SDLC.
Role Overview
- Acts as a technical leader and mentor for the UI SDET/QA automation team.
- Owns the UI automation vision and framework architecture, primarily using Playwright with TypeScript.
- Partners closely with Product, Engineering, UX, and DevOps to ensure high-quality frontend delivery.
- Champions continuous improvement, engineering excellence, and rapid feedback.
Core Technology Stack
- Playwright – primary UI automation framework
- TypeScript – main automation language
- Jest – unit testing and utilities
- Docker & Kubernetes – containerized test environments
- GitHub Actions – CI/CD integration
- Karate – API and E2E testing support
Key Responsibilities
Technical Leadership & Strategy
- Lead the design, implementation, and evolution of UI automation frameworks.
- Enforce best practices such as Page Object Model (POM), data-driven testing, and atomic test design.
- Reduce flakiness, improve execution speed, and ensure meaningful assertions.
Quality & Process Ownership
- Own and execute comprehensive UI test plans aligned with BDD practices.
- Establish and maintain robust regression testing.
- Drive root cause analysis and continuous improvement for UI defects.
- Incorporate feedback from test outcomes and production issues to improve coverage.
Culture & Collaboration
- Promote a TDD mindset and shared ownership of test automation among engineers.
- Champion a culture of quality, learning, and continuous improvement.
- Support team growth through coaching, feedback, and skills development.
- Ensure strong documentation, code quality, and knowledge sharing.
Core Skills & Qualities
- Strong expertise in Playwright, POM, and UI automation best practices.
- Advanced knowledge of TypeScript, waits, locators, and test stability techniques.
- Experience with CI/CD pipelines, test reporting, and analytics.
- Strong collaboration, communication, and stakeholder management skills.
- Awareness of technical debt, system design trade-offs, and test strategy balance.
- Comfort with ambiguity, fast change, and evolving technologies.
Preferred & Highly Desirable Experience
Preferred
- 5+ years of UI SDET/QA/software engineering experience.
- 2+ years in a technical leadership role.
- Proven success delivering scalable and reliable UI automation systems.
- Experience mentoring engineers in agile, cross-functional teams.
Highly Desirable
- Testing non-deterministic systems, including AI/ML or GenAI-driven UIs.
- Using AI to accelerate testing and SDLC processes.
- Experience with Docker/Kubernetes for test environments.
- Understanding of the Test Pyramid and balancing UI, integration, and unit tests.
- Experience in distributed or multidisciplinary teams.
Firmware Lead
Overview
We are seeking a high-caliber Firmware Lead to join our Engineering team at Gradera. In this role, you will be the technical anchor for the firmware squad, responsible for translating high-level architectural visions into robust, executable low-level designs (LLD). You will lead the design and development of firmware solutions on NXP-based hardware platforms, ensuring seamless real-time data acquisition and integration with cloud-based machine learning (ML) platforms. We are looking for a seasoned expert who can work independently, taking full ownership of the firmware lifecycle from hardware abstraction to cloud-edge synchronization.
Our Core Tech Stack
Embedded & OS
- NXP SoCs/MCUs: i.MX, LPC, and Kinetis series.
- Yocto Project: Custom layers, recipes, BitBake, and kernel configuration for Linux.
- RTOS Platforms: Deterministic performance, task scheduling, and interrupt handling.
Development & Integration
- Languages: Mandatory proficiency in C/C++ and C# (.NET on embedded targets/IoT).
- Communication: MQTT, WebSockets, CAN, UART, SPI, and I2C.
- Cloud & ML: Azure IoT Hub, AWS IoT Core, and data streaming via Kafka or Kinesis.
Infrastructure & Security
- Security: Secure boot, encryption, and device authentication.
- DevOps: Containerization (Docker) and CI/CD for firmware.
Key Responsibilities
- Architectural Ownership: Convert high-level blueprints into detailed technical designs for NXP-based systems, ensuring optimal performance across hardware and software layers.
- Autonomous Execution: Lead the end-to-end development of firmware modules, making critical technical decisions and resolving complex blockers without supervision.
- ML Pipeline Leadership: Collaborate with Data Engineering and ML teams to architect streaming and batch ingestion pipelines, ensuring data is correctly structured for ML training.
- Cloud-Edge Synchronization: Design secure and reliable transmission protocols for device-to-cloud communication, focusing on edge-to-cloud integration.
- Standards Enforcement: Act as the guardian of engineering excellence, implementing security best practices (secure boot, TLS) and ensuring high code quality.
- Technical Mentoring: Act as a technical beacon for the squad, conducting rigorous code reviews and mentoring senior engineers in Yocto Linux and RTOS concepts.
- Strategic Troubleshooting: Lead the debugging of critical firmware issues across hardware and software layers, including OTA update implementations.
Preferred Qualifications
- 8 to 10 years of professional experience in embedded firmware development.
- Proven ability to work independently and lead technical squads in a fast-paced environment.
- Expert-level mastery of the Yocto Project and RTOS constraints.
- Deep proficiency in C/C++ and C# for embedded systems.
- Demonstrated track record of delivering low-level designs for edge-to-cloud ML systems.
Highly Desirable
- Industry Experience: Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is highly regarded.
- Experience with Edge AI / TinyML and industrial protocols (Modbus, OPC-UA).
- Knowledge of Cybersecurity standards for secure device provisioning.
Technical Lead – Full Stack
Overview
We are seeking a high-caliber Tech Lead – Full Stack to join our Digital Twin Platform and Simulation team. In this role, you will translate high-level architectural visions into robust, executable low-level designs (LLD). As the technical anchor for the squad, providing deep technical guidance and maintaining rigorous standards, you will enable the team to deliver scalable interfaces and services that power our digital twin ecosystem.
Our Core Full Stack
Front-End Engineering
- React + TypeScript (with Next.js), ShadCN/UI, Tailwind CSS for building complex, state-heavy interactive dashboards.
- JavaScript (ES6+) and TypeScript for type-safe management of simulation data
- State management (Redux/Zustand) optimized for high-frequency data updates
- Experience with ESRI ArcGIS maps in the UI is a big plus
Back-End & Microservices
- Java and Spring Boot for building high-scale, resilient microservices
- REST APIs for seamless communication between services and front-end consumers
- Microservices architecture and system integration patterns
- Experience designing, building, and integrating RESTful or GraphQL APIs, as well as Protobuf with gRPC and gRPC-Web
Engineering Excellence
- GitHub for version control and rigorous Code Reviews
- CI/CD pipelines, Docker, and Kubernetes for cloud-native deployment
Key Responsibilities
- Low-Level Design (LLD): Convert high-level architectural blueprints into detailed technical designs, including class diagrams, sequence diagrams, and API specifications.
- Technical Mentoring: Lead and coach the engineering team through pair programming, technical 1-on-1s, and hands-on guidance to elevate overall team competency.
- Standards Enforcement: Ensure all code adheres to the defined engineering standards, SOLID principles, and design patterns established by the Architects.
- Code Quality & Review: Conduct comprehensive code reviews to maintain high quality, ensuring the team delivers clean, testable, and maintainable code.
- Technical Anchoring: Serve as the "go-to" expert for the squad to resolve complex technical blockers and provide clarity on implementation details.
- Hands-on Development: Direct implementation of critical and complex modules, setting the benchmark for performance and reliability.
- System Integration: Oversee the technical execution of integrations between full-stack applications and core data layers (Databricks/Neo4j).
- Delivery Governance: Ensure the squad meets sprint objectives by maintaining a high standard of execution and managing technical debt effectively.
Preferred Qualifications
- 8 to 10 years of professional experience in full-stack software development.
- Proven track record in a Tech Lead capacity, with strong experience in creating Low-Level Designs.
- Expert-level proficiency in Java, Spring Boot, and React.
- Deep understanding of Microservices architecture and RESTful API design.
- Familiarity with ShadCN/UI, along with Material UI, Storybook, and/or similar tools
- Demonstrated ability to mentor engineering teams and drive technical excellence in an Agile environment.
- Experience working in the India tech region, preferably within high-growth product engineering teams.
Highly Desirable
- Experience building platforms for Digital Twin, IoT, or Simulation environments.
- Familiarity with visualizing complex networks or real-time operational data.
- Knowledge of performance tuning for both front-end rendering and back-end processing.
- Experience leading teams in an Agile/Scrum environment.
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus.
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time
Data Quality Engineer
Engineering - Hyderabad, Telangana
About Gradera — Digital Twin & Physical AI Platform
At Gradera, we are building a next-generation Digital Twin and Physical AI platform that enables enterprises to model, simulate, and optimize complex real-world systems. Our work brings together strategy, architecture, data, simulation, and experience design to power decision-making across large-scale operational environments such as manufacturing, logistics, and supply chain networks.
This platform-led initiative applies AI-native execution, advanced simulation, and governed orchestration to help organizations test scenarios, predict outcomes, and continuously improve performance. We operate with an enterprise-first mindset, prioritizing reliability, transparency, and measurable business impact as we build intelligent systems that scale beyond a single industry or use case.
Data Quality Engineer
Overview
We are seeking a detail-oriented Data Quality Engineer to ensure the integrity, accuracy, and reliability of data powering our digital twin and AI platforms. You will design and implement data quality frameworks, build automated validation pipelines, and establish quality metrics that enable trusted, simulation-ready data products. This role is critical to ensuring that operational decisions and ML models are built on a foundation of high-quality, governed data.
Our core data quality stack includes:
Data Quality Frameworks
- Delta Live Tables expectations for declarative quality enforcement
- Great Expectations for comprehensive data validation
- Databricks data profiling and quality monitoring
Platform & Tools
- Databricks SQL and PySpark for quality checks at scale
- Unity Catalog for lineage tracking and governance compliance
- Python for custom validation logic and anomaly detection
Observability
- Quality metrics dashboards and alerting
- Data profiling and statistical analysis
- Anomaly detection and drift monitoring
Key Responsibilities
- Design and implement data quality frameworks using Delta Live Tables expectations and Great Expectations
- Build automated data validation pipelines that enforce quality standards at ingestion and transformation stages
- Develop data profiling processes to understand data distributions, patterns, and anomalies
- Define and track data quality metrics (completeness, accuracy, consistency, timeliness, validity)
- Implement anomaly detection mechanisms to identify data drift and quality degradation
- Create quality dashboards and alerting systems for proactive issue identification
- Collaborate with data engineers to embed quality checks into ETL/ELT pipelines
- Partner with data architects to establish data quality standards and governance policies
- Investigate and perform root cause analysis for data quality issues
- Document data quality rules, thresholds, and remediation procedures
- Support data certification processes for simulation-ready and ML-ready datasets
- Drive continuous improvement in data quality practices and tooling
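The expectation-style checks described above can be sketched framework-free; a minimal example (rule names, fields, and rows are hypothetical) that evaluates each rule as a predicate and counts passes and failures per rule, much like a DLT or Great Expectations validation report:

```python
def validate(rows, rules):
    """rules: {name: predicate}; returns {name: {"pass": int, "fail": int}}."""
    report = {name: {"pass": 0, "fail": 0} for name in rules}
    for row in rows:
        for name, pred in rules.items():
            report[name]["pass" if pred(row) else "fail"] += 1
    return report

# Illustrative operational rows and quality rules
rows = [
    {"sensor_id": "a1", "value": 3.2},
    {"sensor_id": None, "value": 7.7},
    {"sensor_id": "a2", "value": -1.0},
]
rules = {
    "sensor_id_not_null": lambda r: r["sensor_id"] is not None,
    "value_non_negative": lambda r: r["value"] >= 0,
}
print(validate(rows, rules))
```

Per-rule pass/fail counts are what the completeness and validity metrics above are derived from; at scale the same rules would be pushed down into SQL or PySpark rather than iterated in Python.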
Preferred Qualifications
- 6+ years of experience in data engineering or data quality roles, with 3+ years focused on data quality
- Track record of implementing enterprise-scale data quality frameworks
- Experience with Lakehouse architectures (Delta Lake, Iceberg)
- Familiarity with real-time data quality monitoring for streaming pipelines
- Experience working in agile, cross-functional teams
Highly Desirable
- Experience with data quality for digital twin or simulation platforms
- Familiarity with operational state data validation and temporal consistency checks
- Experience with graph data quality validation (Neo4j or similar)
- Exposure to ML data quality (feature validation, training data quality)
- Experience with data observability platforms
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time
Our Core Full Stack
Front-End Engineering
- React for building complex, state-heavy interactive dashboards
- JavaScript (ES6+) and TypeScript for type-safe management of simulation data
- State management (Redux/Zustand) optimized for high-frequency data updates
Back-End & Microservices
- .NET and Spring Boot for building high-scale, resilient microservices
- REST APIs for seamless communication between services and front-end consumers
- Microservices architecture and system integration patterns
Engineering Excellence
- GitHub for version control and rigorous Code Reviews
- CI/CD pipelines, Docker, and Kubernetes for cloud-native deployment
Key Responsibilities
- Low-Level Design (LLD): Convert high-level architectural blueprints into detailed technical designs, including class diagrams, sequence diagrams, and API specifications.
- Technical Mentoring: Lead and coach the engineering team through pair programming, technical 1-on-1s, and hands-on guidance to elevate overall team competency.
- Standards Enforcement: Ensure all code adheres to the defined engineering standards, SOLID principles, and design patterns established by the Architects.
- Code Quality & Review: Conduct comprehensive code reviews to maintain high quality, ensuring the team delivers clean, testable, and maintainable code.
- Technical Anchoring: Serve as the "go-to" expert for the squad to resolve complex technical blockers and provide clarity on implementation details.
- Hands-on Development: Direct implementation of critical and complex modules, setting the benchmark for performance and reliability.
- System Integration: Oversee the technical execution of integrations between full-stack applications and core data layers (Databricks/Neo4j).
- Delivery Governance: Ensure the squad meets sprint objectives by maintaining a high standard of execution and managing technical debt effectively.
Preferred Qualifications
- 8 to 10 years of professional experience in full-stack software development.
- Proven track record in a Tech Lead capacity, with strong experience in creating Low-Level Designs.
- Expert-level proficiency in .NET, Spring Boot, and React.
- Deep understanding of Microservices architecture and RESTful API design.
- Demonstrated ability to mentor engineering teams and drive technical excellence in an Agile environment.
- Experience working in the India tech region, preferably within high-growth product engineering teams.
Highly Desirable
- Experience building platforms for Digital Twin, IoT, or Simulation environments.
- Familiarity with visualizing complex networks or real-time operational data.
- Knowledge of performance tuning for both front-end rendering and back-end processing.
- Experience leading teams in an Agile/Scrum environment.
- Exposure to industrial domains such as Manufacturing, Logistics, or Transportation is a plus.
Location: Hyderabad, Telangana
Department: Engineering
Employment Type: Full-Time