
About InspireOne TMI IBM TACK
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with at least 3 years of hands-on Dremio experience
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- The virtual interview requires video to be on; are you okay with that?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
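Partition pruning is central to the cloud-storage and query-performance work described above. As a minimal illustrative sketch (the bucket and table names are hypothetical), a Hive-style partition path for Parquet files in object storage, which engines like Dremio can use to skip irrelevant files, might be built like this:

```python
from datetime import date

def partition_path(bucket: str, table: str, day: date, file_name: str) -> str:
    """Build a Hive-style partition path (year=/month=/day=) for a Parquet
    file in object storage, so query engines can prune partitions."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={day.year}/month={day.month:02d}/day={day.day:02d}/{file_name}"
    )

print(partition_path("lake-raw", "orders", date(2024, 5, 7), "part-0000.parquet"))
# s3://lake-raw/orders/year=2024/month=05/day=07/part-0000.parquet
```

Filtering queries on the partition columns (here `year`, `month`, `day`) lets the planner read only the matching prefixes instead of scanning the whole table.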
Ideal Candidate
- Bachelor’s or Master’s in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Ctruh is looking for a deeply technical, hands-on Senior Backend Engineer - someone who can architect systems in the morning, write production-grade code in the afternoon, and scale infrastructure to power millions of 3D and XR experiences.
You will own the entire backend ecosystem: architecture, APIs, databases, infrastructure, performance, and reliability. This is not an oversight or management-only position - it is a builder’s role where you design, code, deploy, and optimize mission-critical systems.
You will make foundational decisions, build distributed systems that handle massive 3D processing workloads, and lead backend engineering direction as the platform scales globally.
What You’ll Build
1. System Architecture & Design
- Architect highly scalable backend systems from the ground up
- Define technology choices: frameworks, databases, queues, caching layers
- Evaluate microservices vs monoliths based on product stage
- Design REST, GraphQL, and real-time WebSocket APIs
- Build event-driven systems for asynchronous processing
- Architect multi-tenant systems with strict data isolation
- Maintain architectural documentation and technical specifications
2. Core Backend Services
- Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
- Create 3D asset processing pipelines for uploads, conversions, and optimization
- Develop distributed job workers for CPU/GPU-intensive tasks
- Build authentication and authorization systems (RBAC)
- Implement billing, subscription, and usage metering
- Build secure webhook systems and third-party integration APIs
- Create real-time collaboration features via WebSockets or SSE
3. Data Architecture & Databases
- Design scalable schemas for 3D metadata, XR sessions, and analytics
- Model complex product catalogs with variants and hierarchies
- Implement Redis-based caching strategies
- Build search and indexing systems (Elasticsearch, Algolia)
- Architect ETL pipelines and data warehouses
- Implement sharding, partitioning, and replication strategies
- Design backup, restore, and disaster recovery workflows
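The Redis-based caching strategies above commonly follow a cache-aside pattern: check the cache, and on a miss load from the source and store the result with a TTL. A minimal sketch, using an in-memory dict as a stand-in for Redis (the loader and keys are hypothetical):

```python
import time

class TTLCache:
    """Minimal cache-aside helper: get_or_load returns a cached value until
    its TTL expires, then reloads via the supplied loader function."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:      # cache hit, still fresh
            return entry[0]
        value = loader(key)               # cache miss: load from the source
        self._store[key] = (value, now + self.ttl)
        return value

calls = []
def load_metadata(asset_id):
    calls.append(asset_id)                # stands in for a database read
    return {"id": asset_id, "format": "parquet"}

cache = TTLCache(ttl_seconds=60)
cache.get_or_load("a1", load_metadata)
cache.get_or_load("a1", load_metadata)    # second call is served from cache
print(len(calls))  # 1
```

With Redis the same shape applies, with `GET`/`SETEX` replacing the dict operations.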
4. Scalability & Performance
- Build systems designed for 10x–100x traffic growth
- Implement load balancing, autoscaling, and distributed processing
- Optimize API response times and database performance
- Implement global CDN delivery for heavy 3D assets
- Build rate limiting, throttling, and backpressure mechanisms
- Optimize storage and retrieval of large 3D files
- Profile and improve CPU, memory, and network performance
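Of the rate limiting and backpressure mechanisms listed above, the token bucket is a common building block: tokens refill at a steady rate, requests spend them, and bursts are capped by the bucket capacity. A minimal single-process sketch (the rate and capacity values are illustrative):

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: sustains `rate` requests per second
    and allows bursts of up to `capacity` requests."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=3)   # 5 req/s sustained, burst of 3
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 allowed, the rest throttled
```

A production variant would keep the bucket state in Redis or the API gateway so limits hold across instances.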
5. Infrastructure & DevOps
- Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
- Build CI/CD pipelines for automated deployments and rollbacks
- Use Infrastructure-as-Code tools (Terraform, CloudFormation)
- Set up monitoring, logging, and alerting systems
- Use Docker and Kubernetes for container orchestration
- Implement security best practices for data, networks, and secrets
- Define disaster recovery and business continuity plans
6. Integration & APIs
- Build integrations with Shopify, WooCommerce, Magento
- Design webhook systems for real-time events
- Build SDKs, client libraries, and developer tools
- Integrate payment gateways (Stripe, Razorpay)
- Implement SSO and OAuth for enterprise customers
- Define API versioning and lifecycle/deprecation strategies
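Secure webhook systems typically sign each payload with a shared secret so the receiver can reject forgeries. A minimal HMAC-SHA256 sketch (the secret and payload are hypothetical; real providers also add timestamps to resist replay):

```python
import hashlib
import hmac

def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature a webhook sender would attach."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, payload: bytes, signature: str) -> bool:
    """Constant-time check of an incoming webhook signature."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)

secret = b"whsec_demo"                           # hypothetical shared secret
body = b'{"event": "asset.processed"}'
sig = sign_payload(secret, body)
print(verify_webhook(secret, body, sig))         # True
print(verify_webhook(secret, b"tampered", sig))  # False
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels when comparing signatures.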
7. Data Processing & Analytics
- Build analytics pipelines for engagement, conversions, and XR performance
- Process high-volume event streams at scale
- Build data warehouses for BI and reporting
- Develop real-time dashboards and insights systems
- Implement analytics export pipelines and integrations
- Enable A/B testing and experimentation frameworks
- Build personalization and recommendation systems
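A/B testing frameworks like those above usually assign variants by deterministic hashing, so the same user always lands in the same bucket without storing any state. A minimal sketch (the experiment and user IDs are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant by hashing
    the (experiment, user) pair, giving a stable, roughly even split."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user + experiment always maps to the same variant:
print(assign_variant("user-42", "new-viewer-ui") ==
      assign_variant("user-42", "new-viewer-ui"))  # True
```

Salting the hash with the experiment name keeps assignments independent across experiments.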
Technical Stack
Backend Languages & Frameworks
- Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
- Secondary: Go, Java/Kotlin (Spring)
- APIs: REST, GraphQL, gRPC
Databases & Storage
- SQL: PostgreSQL, MySQL
- NoSQL: MongoDB, DynamoDB
- Caching: Redis, Memcached
- Search: Elasticsearch, Algolia
- Storage/CDN: AWS S3, CloudFront
- Queues: Kafka, RabbitMQ, AWS SQS
Cloud & Infrastructure
- Cloud: AWS (primary), GCP/Azure (nice to have)
- Compute: EC2, Lambda, ECS, EKS
- Infrastructure: Terraform, CloudFormation
- CI/CD: GitHub Actions, Jenkins, CircleCI
- Containers: Docker, Kubernetes
Monitoring & Operations
- Monitoring: Datadog, New Relic, CloudWatch
- Logging: ELK Stack, CloudWatch Logs
- Error Tracking: Sentry, Rollbar
- APM tools
Security & Authentication
- Authentication: JWT, OAuth 2.0, SAML
- Secrets Management: AWS Secrets Manager, Vault
- Security: Encryption at rest and in transit, TLS/SSL, IAM
What We’re Looking For
Must-Haves
- 5+ years of backend engineering experience with strong system design expertise
- Experience building scalable systems from scratch
- Expert-level proficiency in at least one backend stack (Node, Python, Go, or Java)
- Deep understanding of distributed systems and microservices
- Strong SQL and NoSQL design skills with performance optimization
- Hands-on AWS cloud experience
- Ability to write high-quality production code daily
- Experience building and scaling RESTful APIs
- Strong understanding of caching, sharding, and horizontal scaling
- Solid security and best-practice implementation experience
- Proven leadership and mentoring capability
Highly Desirable
- Experience with large file processing such as 3D, video, or images
- Background in SaaS, multi-tenancy, or e-commerce
- Experience with real-time systems such as WebSockets or streams
- Knowledge of ML or AI infrastructure
- Experience with high-availability systems and disaster recovery planning
- Familiarity with GraphQL, gRPC, and event-driven architectures
- DevOps or infrastructure engineering background
- Experience with XR, AR, or VR backend systems
- Open-source contributions or technical writing
- Prior senior technical leadership experience
Technical Challenges You’ll Solve
- Designing large-scale 3D asset processing pipelines
- Serving XR content globally with ultra-low latency
- Scaling from thousands to millions of daily requests
- Efficiently handling CPU/GPU-heavy workloads
- Architecting multi-tenancy with complete data isolation
- Managing billions of analytics events at scale
- Building future-proof APIs with backward compatibility
Why Ctruh
- Architectural Ownership: Build foundational systems from scratch
- Deep Technical Work: Solve distributed systems and scaling challenges
- Hands-On Impact: Design and code mission-critical infrastructure
- Diverse Problems: APIs, infrastructure, data, ML, XR, asset processing
- Massive Scale Opportunity: Build systems for exponential growth
- Modern stack and best practices
- Product Impact: Your architecture directly powers millions of users
- Leadership Opportunity: Shape engineering culture and direction
- Learning Environment: Stay at the forefront of backend engineering
- Backed by AWS, Microsoft, and Google
Location & Work Culture
- Location: Bengaluru
- Schedule: 6 days a week (5 days in office, Saturdays WFH)
- Culture: Builder mindset, strong ownership, technical excellence
- Team: Small, highly skilled backend and infrastructure team
- Resources: AWS credits, latest tooling, learning budget
The Ideal Candidate
- You are a backend engineer first and architect second - someone who still enjoys writing code, debugging complex issues, and solving scaling problems hands-on.
- You have built systems from the ground up and experienced the challenges that come with scaling them. You think in systems, evaluate trade-offs clearly, and design architectures that are practical, resilient, and future-proof.
- You are comfortable discussing microservices vs monoliths, choosing the right database for a use case, designing APIs, and introducing caching or queues when appropriate. You have made architectural decisions, optimized them later, and learned from the process.
- You stay close to the code. You pair program, review pull requests, jump into production incidents, and ship features alongside your team. You enjoy designing high-level architecture and then implementing the most critical components yourself.
- You balance ambition with pragmatism. You know when to use managed services, when to build custom solutions, and how to ship iteratively while maintaining system stability.
- Most importantly, you are a builder - someone excited to architect the backend foundations of a fast-growing XR platform, optimize performance for massive 3D workloads, and design infrastructure that supports global, real-time immersive experiences.
Job description
· English is a must – written and verbal (a regional language is a plus point)
· Freshers can also apply
· Excellent geographical knowledge is a must
· Package calculations and itinerary preparation
· Pitching packages to customers and the ability to sell packages over phone and internet
· Good knowledge of domestic and international destinations
· Hotel bookings, flight bookings, making land-operator arrangements, and negotiating deals with hotels
· Converting leads into business
· Preparing tour costing
· Communication with clients
· Handling entire tour sales and operations
· Designing attractive package itineraries
· Understanding clients' package requirements
· Presenting, convincing, and selling tour packages to clients
· Sales follow-up
Working Hours: 10 AM – 07 PM
Working Days: Monday to Saturday
Location: B-117, DDA Sheds, Okhla Phase 1
- Android Developer: Vadodara
- Basic Requirements:
- Education: BCA, MCA, Msc IT, Bsc IT or any equivalent qualification
- Work Experience: 3–6 years of experience in the same field
- Good Communication skill (Preferred)
- Can work under pressure
- Meet deadlines
- Responsibilities:
- Reviewing the current system workflow and database design (if already developed).
- Analyzing system needs and producing a detailed specification document.
- Creating step-by-step flow charts or pseudocode that show how program code must be written in order to work properly.
- Developing functional modules as required, within deadlines.
- Integrating third-party tools wherever required.
- Performing and documenting unit testing for developed functions.
- Maintaining the system by monitoring and correcting software defects.
- Continuously updating technical knowledge and skills per industry standards.
- Collaborating with technical writers to create user documents.
- Carrying out all tasks to the highest standards.
- Function/ Skills
- Strong experience with the modern Android toolchain: Android Studio and the Android SDK
- Ability to create Data Structures and Algorithms.
- Knowledge of Flutter will be an added advantage
- Experience working with cloud computing platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure
- Strong designing / problem solving skills.
XressBees – a logistics company started in 2015 – is among the fastest-growing companies in its sector. Our vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business: B2C logistics, 3PL, B2B Xpress, hyperlocal, and cross-border logistics.

Our strong domain expertise and constant focus on innovation have helped us rapidly evolve into the most trusted logistics partner in India. XB has progressively carved its way towards best-in-class technology platforms, an extensive logistics network reach, and a seamless last-mile management system.

While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as the service provider of choice and leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees is enriching and scaling its end-to-end logistics solutions at a high pace. This is a great opportunity to join the team forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In this role, you will build high-performance AI/ML solutions using groundbreaking AI/ML and Big Data technologies. You will need to understand business requirements and convert them into solvable data science problem statements. You will be involved in end-to-end AI/ML projects, starting from smaller-scale POCs all the way to full-scale ML pipelines in production.

Seasoned AI/ML Engineers will own the implementation and productionization of cutting-edge AI-driven algorithmic components for search, recommendation, and insights to improve the efficiency of the logistics supply chain and serve the customer better.

You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on the organization while solving challenging problems in the areas of AI, ML, Data Analytics, and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer will apply themselves in the areas of:
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc..
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs, without needing to build AI expertise in each team – Decision Support, NLP, and Computer Vision, for public clouds and the enterprise in NLU, Vision, and Conversational AI. The candidate is adept at working with large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. They must have experience with a variety of data mining/data analysis methods and data tools, with building and implementing models, with using and creating algorithms, and with creating and running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and Design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs, understanding theirs, and addressing external and internal stakeholders' product challenges.
● Build core of Artificial Intelligence and AI Services such as Decision Support, Vision, Speech, Text, NLP, NLU,
and others.
● Leverage cloud technology – AWS, GCP, Azure
● Experiment with ML models in Python using machine learning libraries (Pytorch, Tensorflow), Big Data,
Hadoop, HBase, Spark, etc
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experience, supply chain metrics, and other business outcomes.
● Develop company A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects using data science techniques and associated libraries such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques require a strong understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, or related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest,
Boosting, Trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, Digital Ocean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from 3rd-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially Deep Learning) models to bring down serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker)
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
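The GLM/regression fundamentals asked for above can be illustrated with a tiny logistic regression fit by plain gradient descent, with no ML libraries (the toy data is hypothetical):

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Fit a one-feature logistic regression p = sigmoid(w*x + b) by
    batch gradient descent on the log loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x                      # gradient w.r.t. w
            gb += (p - y)                          # gradient w.r.t. b
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Toy separable data: the label is 1 exactly when x > 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
predict = lambda x: 1 / (1 + math.exp(-(w * x + b))) > 0.5
print([predict(x) for x in xs])  # [False, False, False, True, True, True]
```

Libraries like scikit-learn or statsmodels implement the same model with regularization and better optimizers; the sketch only shows the underlying mechanics.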

Location: Chennai, Bangalore, Pune, Jaipur | Experience: 5–8 years
- Implement best practices for the engineering team across code hygiene, overall architecture design, testing, and deployment activities
- Drive technical decisions for building data pipelines, data lakes, and analyst access.
- Act as a leader within the engineering team, providing support and mentorship for teammates across functions
- Bachelor’s Degree in Computer Science or equivalent job experience
- Experienced developer in large data environments
- Experience using Git productively in a team environment
- Experience with Docker
- Experience with Amazon Web Services
- Ability to sit with business or technical SMEs to listen, learn and propose technical solutions to business problems
- Experience using and adapting to new technologies
- Take and understand business requirements and goals
- Work collaboratively with project managers and stakeholders to make sure that all aspects of the project are delivered as planned
- Strong SQL skills with MySQL or PostgreSQL
- Experience with non-relational databases and their role in web architectures desired
Knowledge and Experience:
- Good experience with Elixir and functional programming a plus
- Several years of python experience
- Excellent analytical and problem-solving skills
- Excellent organizational skills
- Proven verbal and written cross-department and customer communication skills

NLP ENGINEER at KARZA TECHNOLOGIES
● Work with the engineering team to strategize and execute the development of data products
● Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
● Identify relevant data sources and sets to mine for client business needs, and collect large structured and
unstructured datasets and variables
● Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve
models, and clean and validate data for uniformity and accuracy
● Analyze data for trends and patterns, and Interpret data with a clear objective in mind
● Implement analytical models into production by collaborating with software developers and machine
learning engineers
● Communicate analytic solutions to stakeholders and implement improvements as needed to operational
systems
What you need to work with us:
● Good understanding of data structures, algorithms, and the first principles of mathematics.
● Proficient in Python and packages like NLTK, NumPy, and Pandas
● Should have worked on deep learning frameworks (like Tensorflow, Keras, PyTorch, etc)
● Hands-on experience in Natural Language Processing, Sequence, and RNN Based models
● Mathematical intuition of ML and DL algorithms
● Should be able to perform thorough model evaluation by creating hypotheses on the basis of statistical
analyses
● Should be comfortable in going through open-source code and reading research papers.
● Should be curious or thoughtful enough to answer the “WHYs” pertaining to the most cherished
observations, thumb rules, and ideas across the data science community.
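Behind the NLP skills above sits the vector-space intuition that texts can be compared as term-count vectors. A minimal bag-of-words cosine similarity sketch (the example strings are hypothetical; real pipelines would add tokenization, stop-word removal, and TF-IDF weighting):

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words model:
    count terms, then compare the count vectors by angle."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(bow_cosine("verify pan card", "verify pan number"))  # ~0.667 (two shared terms)
print(bow_cosine("verify pan card", "unrelated text"))     # 0.0 (no overlap)
```

The same dot-product-over-norms formula carries over unchanged to dense embeddings from RNN or transformer models.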
Qualification and Experience Required:
● 1 - 4 years of relevant experience
● Bachelor’s/Master’s degree in Computer Science / Computer Engineering / Information Technology
- Provide L1 support including desktops, laptops and printers and other IT related accessories
- Carry out initial troubleshooting and resolve the issue; if not resolved, escalate to L2 support or the corresponding OEM
- Raise ticket within supplied ticketing system and update and close with end user confirmation
- Install and configure Windows Operating Systems and application software Like MS-Office, Outlook, Antivirus etc
- Install, configure and upgrade security software (e.g. Antivirus programs) like McAfee or Kaspersky software
- Install, configure and troubleshoot various PC Hardware Components
- Configure and troubleshoot LAN and Wi-Fi setups
- Install and troubleshoot Printer, Scanner and Projector
About you:
• 1–2 years of experience as a Desktop Engineer from any industry
• Hands-on experience troubleshooting desktops, laptops, and PC hardware
• Strong experience in antivirus software installation and management
• Good experience in configuration and troubleshooting of network devices
• Basic knowledge of server administration and server hardware
• Good working knowledge of MS Office
• Good communication and problem-solving skills
• Any bachelor’s degree with an IT background would be a plus
About Company
Couture AI Platform provides pluggable building blocks for the entire AI stack, which are used to build and productionize varied enterprise ML and deep learning use cases. It has enabled some of the largest global organizations to implement vertical-targeted products built on top of its proprietary AI platform.
Nature of Business: Artificial Intelligence Platform
Company Website: www.couture.ai
Job Designation: Product Manager - Decision Science
Job Location: Bengaluru
Job Description & Skills Required
Basic Qualifications:
• Bachelors (Masters preferred) in Computer Science/Mathematics/Research (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from Tier-1 institutes (IITs, IISc, BITS, IIITs, top-ranked global universities) with a good individual academic score.
Looking for decision science managers.
• Strong working knowledge of deep learning, machine learning, and statistics.
• Domain understanding of Personalization, Search, Visual, and Videos.
• Expertise in using Python and statistical/machine learning libraries.
• 5+ years of hands-on experience defining large-scale AI solutions and their deployment for millions of customers.
• Expertise in understanding raw data, data transformation, and feature selection.
• Expertise in metric-driven impact analysis of AI solutions.
• Ability to think creatively and solve problems.
Key Responsibilities:
• Take ownership of end-to-end engagement, from defining to delivering the business requirements, working with various stakeholders within Couture AI and customers.
• Create feedback loops to generate the product roadmaps.
• Formulate the appropriate predictive, analytic, ML, or deep learning solutions leveraging the Couture AI Platform.
• Carry out a cost-benefit comparison of various solutions, driven not only by the analytical rigor of the solutions but also by the pragmatic aspects of deployment, data, and adoption cost.
• Work with platform and tech teams to implement the solutions.
• Drive adoption of Couture AI Platform solutions by demonstrating improvement in success metrics and interfacing with business partners from other teams.
• Build business cases and models to quantify new opportunities, using data and solid business judgment.
