
1. End-to-end responsibility for multiple products and features
2. Coordinating with the engineering department (Tech Team) to deliver functional solutions
3. Suggesting product enhancements to improve user experience
4. Performing quality assurance controls on products
5. Conducting research to identify customer needs and market gaps
6. Prioritizing the implementation of new features and setting specific timelines
7. Liaising with the Marketing department to ensure proper advertisement and positioning of new products
8. Monitoring and reporting on users' reactions after launch
9. Creating support and training documents for internal and external users
Who is a perfect fit for the above role?
1. Academic background: Engineering only. Non-engineers are not eligible to apply for this role.
2. Familiarity with market research, consumer behavior and marketing techniques.
3. Candidates who have run their own start-up, worked in a start-up or co-founded another start-up are preferred, although this is not a necessity.
4. Strong time management skills.
5. Good communication skills, along with the ability to collaborate effectively with cross-functional teams.

Job Description :
We are seeking a highly experienced Sr Data Modeler / Solution Architect to join the Data Architecture team at our Corporate Office in Bangalore. The ideal candidate will have 4 to 8 years of experience in data modeling and architecture, with deep expertise in the AWS cloud stack, data warehousing, and enterprise data modeling tools. This individual will be responsible for designing and creating enterprise-grade data models and driving the implementation of a Layered Scalable Architecture or Medallion Architecture to support robust, scalable, and high-quality data marts across multiple business units.
This role will involve managing complex datasets from systems like PoS, ERP, CRM, and external sources, while optimizing performance and cost. You will also provide strategic leadership on data modeling standards, governance, and best practices, ensuring the foundation for analytics and reporting is solid and future-ready.
Key Responsibilities:
· Design and deliver conceptual, logical, and physical data models using tools like ERWin.
· Implement Layered Scalable Architecture / Medallion Architecture for building scalable, standardized data marts.
· Optimize performance and cost of AWS-based data infrastructure (Redshift, S3, Glue, Lambda, etc.).
· Collaborate with cross-functional teams (IT, business, analysts) to gather data requirements and ensure model alignment with KPIs and business logic.
· Develop and optimize SQL code, materialized views, and stored procedures in AWS Redshift (see the sketch after this list).
· Ensure data governance, lineage, and quality mechanisms are established across systems.
· Lead and mentor technical teams in an Agile project delivery model.
· Manage data layer creation and documentation: data dictionary, ER diagrams, purpose mapping.
· Identify data gaps and availability issues with respect to source systems.
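To make the Redshift bullet above concrete, here is a minimal sketch of creating and refreshing a materialized view from Python with psycopg2. The cluster endpoint, credentials, schemas and table/view names are hypothetical illustrations, not details from this posting.

```python
# Minimal sketch: create and refresh a Redshift materialized view via psycopg2.
# Endpoint, credentials and all object names below are hypothetical.
import psycopg2

DDL = """
CREATE MATERIALIZED VIEW mart.daily_sales AS
SELECT store_id,
       CAST(sold_at AS DATE) AS sale_date,
       SUM(amount)           AS total_amount
FROM   staging.pos_transactions
GROUP  BY store_id, CAST(sold_at AS DATE);
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
conn.autocommit = True  # run DDL and REFRESH outside an explicit transaction
with conn.cursor() as cur:
    cur.execute(DDL)
    # Redshift refreshes eligible views incrementally; otherwise it recomputes:
    cur.execute("REFRESH MATERIALIZED VIEW mart.daily_sales;")
conn.close()
```

A scheduled REFRESH of this kind is a common way to keep a data-mart layer current without re-running the full aggregation by hand.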
Required Skills & Qualifications:
· Bachelor’s or Master’s degree in Computer Science, IT, or related field (B.E./B.Tech/M.E./M.Tech/MCA).
· Minimum 4 years of experience in data modeling and architecture.
· Proficiency with data modeling tools such as ERWin, with strong knowledge of forward and reverse engineering.
· Deep expertise in SQL (including advanced SQL, stored procedures, performance tuning).
· Strong experience in data warehousing, RDBMS, and ETL tools like AWS Glue, IBM DataStage, or SAP Data Services.
· Hands-on experience with AWS services: Redshift, S3, Glue, RDS, Lambda, Bedrock, and Q.
· Good understanding of reporting tools such as Tableau, Power BI, or AWS QuickSight.
· Exposure to DevOps/CI-CD pipelines, AI/ML, Gen AI, NLP, and polyglot programming is a plus.
· Familiarity with data governance tools (e.g., ORION/EIIG).
· Domain knowledge in Retail, Manufacturing, HR, or Finance preferred.
· Excellent written and verbal communication skills.
Certifications (Preferred):
· AWS Certification (e.g., AWS Certified Solutions Architect or Data Analytics – Specialty)
· Data Governance or Data Modeling Certifications (e.g., CDMP, Databricks, or TOGAF)
Mandatory Skills
AWS, Technical Architecture, AI/ML, SQL, Data Warehousing, Data Modelling
Job Title: Customer Success Manager
Location: Baner, Pune
About Us:
Truein is a fast-growing B2B SaaS product company offering attendance and timesheet solutions to companies with contractual and distributed workforces. 500+ customers across the globe now believe in what we do and have embarked on this journey with us. At Truein, we are on a mission to bring transparency and control to the time and attendance process. We leverage face recognition and AI technologies. We are backed by investors and a high-potential team of 40 people, and growing.
Our Culture:
At Truein, we genuinely care about every member we hire. You’ll learn new things regardless of your experience level. We strongly believe in creating value for all stakeholders - our employees, customers, and investors. We foster ownership, and have a dynamic, fun and vibrant startup culture.
Role Overview:
As a Customer Success Manager at Truein, you will play a key role in driving customer goals, revenue expansion and product adoption by ensuring the activation, engagement, success, retention and growth of Truein's key enterprise and mid-market clients. This role is focused on high-impact, high-value activities in all aspects of business development and retention. It is an individual contributor role in which you will work with a small but highly competitive and effective team.
Responsibilities:
● Customer onboarding: help customers set up product configuration and upload data
● Conduct product training for key stakeholders in the customer organization
● Drive adoption, improve customer satisfaction, and work with our customers to generate new business (upsell and cross-sell)
● Minimize customer churn, maximize retention and create great customer experience
● Contact customers to explain new features and help them upgrade to new plans
● Collect customer references, testimonials and create case studies
● Be the customer's voice within Truein, providing feedback to our Product team to develop/identify new features
● Conduct quarterly business reviews with senior stakeholders to align product deliveries to client business outcomes
● Help customers map Truein to their existing policies and processes
Requirements:
● 5+ years of experience in a SaaS or software product company in a customer-facing role
● Previous experience working with enterprise customers
● Process-oriented and analytical
● Excellent verbal and written communication skills
● Strong empathy for customers and passion for growth
● Passion for solving client challenges and commitment to client delight
● Result-oriented with great attention to detail
● Bachelor's degree or equivalent experience in Computer Science or related field
Good to have:
● Experience dealing with international customers
● Has worked with a growth-stage startup
● Educational background in technology plus management studies
You will get:
● Competitive compensation package and benefits
● Work closely with and be part of a truly amazing team
● Join a fast-growing company early, make a difference and enjoy the ride
● Challenge yourself and take your career to the next level
Truein is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
- Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components across ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level Big Data systems.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm (see the sketch after this list).
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the NameNode, DataNode, ResourceManager, NodeManager and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installation, Configuration, Managing of Big Data and underlying infrastructure of Hadoop Cluster.
- Hands-on experience in coding MapReduce/YARN programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on clusters for computational analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster data processing; handled importing data from different data sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading the data into HDFS.
- Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence, customer segmentation and smooth maintenance in Spark Streaming.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Worked on creating data pipelines for different events, covering ingestion, aggregation and loading of consumer response data into Hive external tables in HDFS to serve as feeds for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
- Extensively worked on Spark using Python on clusters for computational analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on them using Python.
- In-depth understanding of Hadoop architecture and various components such as HDFS, the MapReduce programming paradigm, High Availability and the YARN architecture.
- Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various knowledge reports using Power BI based on business specifications.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, and Maven and Jenkins build tools.
- Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
- Good experience with use-case development and with software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza into HDFS, and in performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with a Hadoop cluster.
- Hands-on experience in Hadoop Big Data technology, working with MapReduce, Pig and Hive as analysis tools and Sqoop and Flume as data import/export tools.
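As a concrete illustration of the streaming bullets above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet in HDFS, where a Hive external table (and, downstream, a Tableau dashboard) can pick them up. The broker, topic, schema and paths are hypothetical, and running it requires the spark-sql-kafka connector package.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> Parquet on HDFS.
# Broker, topic, schema and paths below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("consumer-events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Parse the Kafka message value (bytes) into typed columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "consumer-events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land micro-batches as Parquet under a path a Hive external table points at.
query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/consumer_events")
    .option("checkpointLocation", "hdfs:///checkpoints/consumer_events")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the file sink exactly-once output across restarts, which matters when the same tables feed reporting dashboards.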



* Work with the product managers and designers and co-own our consumer app (1M+ userbase)
* Own our admin dashboard for all the product offerings - digital gold, gold loan, & gold locker
* Architect, design, and maintain frontend libraries for both our consumer application and admin dashboard
* Mentor a team of 3-4 frontend developers to build a robust, lightweight, and high-performance client-side app
* Translate designs and wireframes into high-quality TypeScript code
* Write documentation and guides for consumer app & admin dashboard
Key Qualifications
* Expertise in ReactJS/Redux
* Expert-level knowledge in TypeScript or Flow
* 3-5+ years of pure frontend development experience
* Expert-level knowledge of developing, shipping, and maintaining Javascript applications
* Knowledge of general software design patterns
* Good understanding of CSR and SSR
* Deep understanding of Javascript
* Up-to-date on the latest build tools and libraries, such as ES6, Webpack, and Babel
* Proficient in Javascript with strong object-oriented design skills
* Able to work independently and drive results
Bonus
* Previous work experience in product-based (B2B/B2C) / fintech startups
* Contributed to or maintained an open-source library
Their services are available across the globe, with over 65% of their client base coming from the US, UK, and Canada. The company's primary focus is Ayurveda: taking this ancient knowledge to anyone who wishes to bring balance back to their health and apply its tools in everyday life.
- Maintaining a positive, empathetic, and professional attitude toward customers at all times.
- Serving customers by providing product and service information and resolving product and service problems.
- Attracting potential customers by answering product and service questions and suggesting information about other products and services.
- Knowing the company's products inside and out so that queries can be handled well.
- Handling queries from international customers on call, email & social media.
What you need to have:
- Should be ready to work the night shift (US working hours)
- Experience providing support to international customers on call or chat preferred
- Excellent fluency in spoken English
- Proven work experience in customer interactions
- Capable of speaking with an American accent (preferred)
- In-depth understanding of customer service practices
- Excellent written and verbal communication skills in English
- Teamwork and motivational skills; ability to handle multiple tasks, work in a fast-paced environment and meet deadlines
- Good follow-up skills
- Honest and self-disciplined
- Excellent team player and quick learner
- Graduate
- English Hons. (Preferred)
• Experience working in complex enterprise solutions is a plus.
• Very strong Oracle PL/SQL and SQL skills are required.
• Strong CSS/jQuery/JavaScript skills would be needed.
• Experience with Oracle APEX development is required.
• Experience using Bitbucket/GitHub for version control.

Role Summary: Back-end Developer who will contribute towards building a highly flexible and scalable back end by bringing deep core technology expertise.
Job Description:
- Develop modules keeping microservices architecture philosophies in check.
- Implement and/or oversee the implementation of different modules as part of an integrated development team.
- Drive the evolution of application performance.
- Ensure project scalability through good project architecture.
Skill Requirements:
- Strong Node.js skills.
- Strong database technology skillsets, namely MongoDB and MySQL.
- Should have experience using RabbitMQ (see the sketch after this list).
- Python (Django) skills are a plus.
- A skilled and pragmatic approach.
- Experience with user-centred design, test-driven development, and iterative/incremental and agile practices.
- Experience with AWS deployment is a definite plus.
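Since Python (Django) is listed as a plus, here is a minimal, hypothetical Python sketch of the RabbitMQ usage mentioned above: publishing a persistent message to a durable queue with the pika client. The host, queue name and payload are illustrative assumptions.

```python
# Minimal sketch: publish a persistent message to a durable RabbitMQ queue.
# Host, queue name and payload are hypothetical.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A durable queue survives broker restarts; delivery_mode=2 marks the
# message itself persistent so it survives along with the queue.
channel.queue_declare(queue="order_events", durable=True)
channel.basic_publish(
    exchange="",              # default exchange routes by queue name
    routing_key="order_events",
    body=json.dumps({"order_id": 42, "status": "created"}),
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```

A consumer would declare the same queue and use channel.basic_consume with manual acknowledgements, so a message is only removed once a worker has finished processing it.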
Individuals applying to this role should ideally have the following attributes:
- Passionate about back-end development and continually following the platform & innovations
- Strong and innovative approach to problem solving and finding solutions
- Excellent communicator (written and verbal, formal and informal)
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
- Ability to multi-task under pressure and work independently with minimal supervision
- Ability to prioritize when under pressure
Looking for a highly skilled MEAN full-stack developer who is comfortable with both front-end and back-end programming.
1+ years in MEAN (MongoDB, ExpressJS, Angular, NodeJS) full-stack development.
Responsibilities and Duties
- Design client-side and server-side architecture.
- Design, Develop, and manage well-functioning databases and applications.
- Designing and developing user interactions on web pages.
- Designing and developing APIs.
Required Experience, Skills and Qualifications
- Working experience in MEAN backend (MongoDB, ExpressJS, NodeJS) with very strong JavaScript.
- Thorough understanding of Angular 8/9
- Hands-on experience with JavaScript Development on both client and server-side
- Knowledge of HTML5, CSS3, Bootstrap, XML and NoSQL databases like DynamoDB (DynamoDB skills preferred)
- Knowledge of RDBMS like MySQL, SQL Server, etc.
- Distributed technology: web services (SOAP, RESTful)
- Strong experience in API development & integration, database design and third-party libraries
- Experience with the AWS Cloud framework (serverless) and related technologies
- Familiarity with CI/CD for deploying the code.
- Familiarity with code versioning tools such as Git, along with knowledge of branching, code merging and code review strategies.
Experience:
- NodeJS: 1 year (Required)
- MongoDB: 1 year (Required)
- Total work: 1 year (Required)
- Angular (7/8/9): 2 years (Required)
Job Type: Full-time

