
Role & Responsibilities
Lead and deliver designs that are innovative and sophisticated, buildable and scalable, and thoroughly documented.
Be responsible for delivering on our vision and strategy for customers, and support the app team in shaping ACKO's D2C roadmap, ensuring design tasks are prioritised in line with the agreed product goals and metrics.
Drive design tasks aligned with our product goals and metrics by collaborating closely with strategy, product, and business teams.
Take ownership of the entire user journey, from ideation and prototyping to user testing and final production, ensuring timely delivery.
Utilise data insights, including metrics and usability studies, to inform and refine design decisions.
Demonstrate a strong understanding of design patterns, best practices, and standards for both iOS and Android platforms.
Communicate and collaborate effectively with engineering teams to ensure seamless implementation of designs.

Requirements:
- Must have experience in VC++ / C++ application programming using Visual Studio.
- Should have thorough knowledge of object-oriented programming (OOP) concepts.
- Should have a strong technical background on Windows or Linux.
- Should be familiar with MFC, STL, COM, multi-threading, socket programming, and data structures.
- Candidates with financial domain / capital markets experience have an added advantage.
📢 DATA SOURCING & ANALYSIS EXPERT (L3 Support) – Mumbai 📢
Are you ready to supercharge your Data Engineering career in the financial domain?
We’re seeking a seasoned professional (5–7 years’ experience) to join our Mumbai team and lead data sourcing, modelling, and analysis. If you’re passionate about solving complex challenges in relational and big-data ecosystems, this role is for you.
What You’ll Be Doing
- Translate business needs into robust data models, program specs, and solutions
- Perform advanced SQL optimization, query tuning, and L3-level issue resolution
- Work across the entire data stack: ETL, Python / Spark, Autosys, and related systems
- Debug, monitor, and improve data pipelines in production
- Collaborate with business, analytics, and engineering teams to deliver dependable data services
What You Should Bring
- 5+ years in financial / fintech / capital markets environment
- Proven expertise in relational databases and big data technologies
- Strong command over SQL tuning, query optimization, indexing, partitioning
- Hands-on experience with ETL pipelines, Spark / PySpark, Python scripting, job scheduling (e.g. Autosys)
- Ability to troubleshoot issues at the L3 level, perform root-cause analysis, and tune performance
- Good communication skills — you’ll coordinate with business users, analytics, and tech teams
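The SQL tuning and indexing skills above can be illustrated with a minimal sketch. This example uses SQLite (so it runs anywhere) with an invented `trades` table; in practice the role would target the team's actual databases, and the table and index names here are purely hypothetical.

```python
import sqlite3

# Invented toy schema to show the index-vs-scan tradeoff behind query tuning.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
    [("AAPL" if i % 2 else "MSFT", i) for i in range(1000)],
)

# Without an index, filtering on symbol forces a full-table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE symbol = 'AAPL'"
).fetchall()

# With an index, the planner can seek directly to matching rows.
conn.execute("CREATE INDEX idx_trades_symbol ON trades (symbol)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE symbol = 'AAPL'"
).fetchall()
```

Comparing the `detail` column of the two plans shows the shift from `SCAN` to `SEARCH ... USING INDEX`, which is the basic mechanism behind most relational query-tuning work.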
POSITION / TITLE: Data Science Lead
Location: Offshore – Hyderabad/Bangalore/Pune
Who are we looking for?
Individuals with 8+ years of experience implementing and managing data science projects. Excellent working knowledge of traditional machine learning and LLM techniques.
The candidate must demonstrate the ability to navigate and advise on complex ML ecosystems from a model building and evaluation perspective. Experience in NLP and chatbots domains is preferred.
We acknowledge that the job market is blurring the lines between data roles: while software skills are necessary, the emphasis of this position is on data science skills, not on data, ML, or software engineering.
Responsibilities:
· Lead data science and machine learning projects, contributing to model development, optimization and evaluation.
· Perform data cleaning, feature engineering, and exploratory data analysis.
· Translate business requirements into technical solutions, document and communicate project progress, manage non-technical stakeholders.
· Collaborate with other DS and engineers to deliver projects.
Technical Skills – Must have:
· Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
· Proficiency with Python for data analysis and for supervised and unsupervised machine learning tasks.
· Ability to translate complex machine learning problem statements into specific deliverables and requirements.
· Should have worked with major cloud platforms such as AWS, Azure or GCP.
· Working knowledge of SQL and NoSQL databases.
· Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
· Keeps abreast of new tools, algorithms, and techniques in machine learning and works to implement them in the organization.
· Strong understanding of evaluation and monitoring metrics for machine learning projects.
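To make the last point concrete, here is a small hand-rolled sketch of common classification metrics (precision, recall, and ROC AUC) on toy data. The labels and scores are invented for the example, and real projects would typically use a library such as scikit-learn; writing the metrics out simply shows what the numbers mean.

```python
# Toy ground-truth labels and model scores (invented for illustration).
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.10, 0.40, 0.35, 0.80, 0.70, 0.20, 0.90, 0.60]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]  # threshold at 0.5

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # of predicted positives, how many were correct
recall = tp / (tp + fn)     # of actual positives, how many were found

# ROC AUC as the probability that a random positive outscores a random negative.
pos = [s for t, s in zip(y_true, y_score) if t == 1]
neg = [s for t, s in zip(y_true, y_score) if t == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))
```

Monitoring a deployed model usually means tracking exactly these kinds of metrics over time on fresh data, and alerting when they drift.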
Technical Skills – Good to have:
· Track record of getting ML models into production
· Experience building chatbots.
· Experience with closed and open source LLMs.
· Experience with frameworks and technologies like scikit-learn, BERT, langchain, autogen…
· Certifications or courses in data science.
Education:
· Bachelor's/Master's/PhD degree in Computer Science, Engineering, Data Science, or a related field.
Process Skills:
· Understanding of Agile and Scrum methodologies.
· Ability to follow SDLC processes and contribute to technical documentation.
Behavioral Skills:
· Self-motivated and capable of working independently with minimal management supervision.
· Well-developed design, analytical, and problem-solving skills.
· Excellent communication and interpersonal skills.
· Excellent team player, able to work with virtual teams in several time zones.
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
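The end-to-end lifecycle described above (collection, preprocessing, training, evaluation) can be sketched in a few lines with scikit-learn. This is a minimal illustration on synthetic data, not the team's actual stack: the dataset, sizes, and model choice are assumptions, and the deployment step is omitted.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# "Collection": a synthetic stand-in for real product data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Preprocessing" + "training" bundled into one reproducible pipeline,
# so the same scaling is applied at fit time and at inference time.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# "Evaluation": hold-out accuracy; in production, the same metric would be
# monitored on live traffic to catch degradation and trigger retraining.
accuracy = model.score(X_test, y_test)
```

Wrapping preprocessing and the model in one pipeline object is a common way to keep training and serving consistent, which matters once models are deployed.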
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Responsibilities:
- Collaborating with data scientists and machine learning engineers to understand their requirements and design scalable, reliable, and efficient machine learning platform solutions.
- Building and maintaining the applications and infrastructure to support end-to-end machine learning workflows, including inference and continual training.
- Developing systems for the definition, deployment, and operation of the different phases of the machine learning and data life cycles.
- Working within Kubernetes to orchestrate and manage containers, ensuring high availability and fault tolerance of applications.
- Documenting the platform's best practices, guidelines, and standard operating procedures and contributing to knowledge sharing within the team.
Requirements:
- 3+ years of hands-on experience in developing and managing machine learning or data platforms
- Proficiency in programming languages commonly used in machine learning and data applications such as Python, Rust, Bash, Go
- Experience with containerization technologies, such as Docker, and container orchestration platforms like Kubernetes.
- Familiarity with CI/CD pipelines for automated model training and deployment. Basic understanding of DevOps principles and practices.
- Knowledge of data storage solutions and database technologies commonly used in machine learning and data workflows.
Channel Partner Executive / Manager JD
Key Responsibilities
- Develop the channel business as a separate entity with defined sales targets
- Identify and onboard new channel partners
- Collaborate with the sales team
- Manage the performance of existing partners and develop strategies to improve revenue
- Develop and maintain relationships with partners
- Identify and resolve conflicts
- Develop new sales channels
- This assignment begins with channel development in India and the USA
Qualification
- MBA Marketing / IB
- Experience in a SaaS product company preferred
- Channel development and management experience
- Proficiency in Customer Relationship Management (CRM)
- Strong problem-solving and negotiation skills
The candidate will be part of the Engineering leadership team (IIT/IIM grads) and will help strategise and execute the product roadmap.
Work closely with Product and business teams to strategise and design features and product experiments.
Lead a team of 5-7 Engineers [Mobile and/or Backend]
Responsible for engineering delivery in platform & product engineering pods at MediBuddy.
Scale the technology architecture, team and product to drive 10x growth in the next 12-24 months.
Code and architect key features that form the backbone of MediBuddy Mobile Applications
Conduct performance reviews and mentor and guide the team reporting to you
Drive adoption of best engineering practices in the team and the organization
Deliver high quality, scalable and maintainable code at a fast pace.
About Alore
Alore brings revenue velocity to businesses by being their dependable growth operating system.
For further information on what Alore can do for growing businesses, please visit alore.io
Alore is based out of Singapore with an office at Bangalore.
Who should not apply for this job
- If you are looking for a 100% hike in your salary but can't commit to the value you will bring to the table
- If you never read books
- If you jump companies every 11-12 months
- If you are not comfortable working on Saturdays
- If you have less than 2 years of experience
- If you have more than 4 years of experience
- If you have never worked in a product-led company
CTC:
8.5-10.5 LPA based on experience and on performance in the interview round (70% Fixed - 30% Variable Incentive based on delivery schedule) (We do ZERO deductions since the salary will be paid from Singapore)
At this position you will:
- Get solid experience with high-load distributed architecture using REST and Kafka services.
- Work with automated CI/CD processes, AWS cloud, and VoIP.
- Implement and support microservices based on Spring MVC, Camel, MySQL, Mongo, and Kafka.
Requirements
- At least 1 year of experience with Java Core, Java EE, Spring Core, Spring Boot.
- Not more than 3 years of experience in Java Core, Java EE, Spring Core, Spring Boot.
- Deep understanding of SQL database technologies.
- Experience with Scrum / Agile methodology.
- Willingness to learn and explore new technologies.
- Fluency with Git: merge, rebase, cherry-pick.
- Good level of English (B1 or higher).
It would be a nice bonus to have:
- Experience with distributed architecture.
- Hands-on experience with Kafka, MongoDB, and Apache Camel.
- Experience with cloud infrastructure (AWS, Google Cloud, Azure, Heroku, etc.).
- Experience with client-side development (we use Angular 8 for the website).
- A degree in computer science.
About the Backend Developer
We are looking for a diligent, driven, passionate person for our efforts on engagement and retention of our customers (schools) in a high-growth environment. The Backend Developer will be the face of the organisation for the schools that use the platform on a day-to-day basis. The Developer will interact with the schools via chat support and email requests, resolve any issues the schools might face in using the platform, and extend any support they might need to use the platform to its fullest.
- As a Backend Developer, you will play a major role in developing and deploying high-quality web platforms. In this multifaceted role, you will get the opportunity to work alongside curriculum experts, teachers, and students, and user-test the product in real school settings.
Preferred Qualifications/Skills
* Frameworks & Technologies: Node.js
* Database: PostgreSQL, SQL
* Tools: Git basics, Scripting basics
* Soft Skills: Having a bias towards action, a good sense of design, empathy, and good communication skills
* Excellent written and oral communication skills
* Great interpersonal skills
About Toddle: www.toddleapp.com
Mine large volumes of credit behavior data to generate insights around product holdings and monetization opportunities for cross-sell
Use data science to size the opportunity and product potential for the launch of any new product/pilots
Build propensity models using heuristics and campaign performance to maximize efficiency
Conduct portfolio analysis and establish key metrics for cross-sell partnerships
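A propensity model of the kind described above can be sketched as a logistic score over customer features. Everything here is invented for illustration: the feature names, the weights, and the customer records are hypothetical, and in practice the weights would be fit on actual campaign response data rather than set by hand.

```python
import math

# Hypothetical heuristic weights; a real model would learn these from
# historical campaign performance (e.g. via logistic regression).
WEIGHTS = {"bias": -2.0, "card_utilization": 1.5, "prior_response": 2.2, "tenure_years": 0.1}

def propensity(customer):
    """Logistic propensity-to-respond score in (0, 1) for a cross-sell offer."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in customer.items())
    return 1 / (1 + math.exp(-z))

# Invented customer records with the features named above.
customers = [
    {"card_utilization": 0.9, "prior_response": 1, "tenure_years": 4},
    {"card_utilization": 0.2, "prior_response": 0, "tenure_years": 1},
]
scores = [propensity(c) for c in customers]

# Target the highest-propensity customers first to maximize campaign efficiency.
ranked = sorted(range(len(customers)), key=lambda i: -scores[i])
```

Ranking customers by score and contacting the top deciles first is the standard way such models drive cross-sell efficiency.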
Desired profile/Skills:
2-5 years of experience with a degree in any quantitative discipline such as Engineering, Computer Science, Economics, Statistics or Mathematics
Excellent problem solving and comprehensive analytical skills – ability to structure ambiguous problem statements, perform detailed analysis and derive crisp insights.
Solid experience using Python and SQL
Prior work experience in the financial services space would be highly valued
Location: Bangalore/ Ahmedabad